Dec 06 06:46:26 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 06 06:46:26 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 06 06:46:26 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 06 06:46:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 06 06:46:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 06 06:46:26 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 06 06:46:26 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 06 06:46:26 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 06 06:46:26 localhost kernel: signal: max sigframe size: 1776
Dec 06 06:46:26 localhost kernel: BIOS-provided physical RAM map:
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 06 06:46:26 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 06 06:46:26 localhost kernel: NX (Execute Disable) protection: active
Dec 06 06:46:26 localhost kernel: SMBIOS 2.8 present.
Dec 06 06:46:26 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 06 06:46:26 localhost kernel: Hypervisor detected: KVM
Dec 06 06:46:26 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 06 06:46:26 localhost kernel: kvm-clock: using sched offset of 1837049755 cycles
Dec 06 06:46:26 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 06 06:46:26 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 06 06:46:26 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 06 06:46:26 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 06 06:46:26 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 06 06:46:26 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 06 06:46:26 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 06 06:46:26 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 06 06:46:26 localhost kernel: Using GB pages for direct mapping
Dec 06 06:46:26 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 06 06:46:26 localhost kernel: ACPI: Early table checksum verification disabled
Dec 06 06:46:26 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 06 06:46:26 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 06:46:26 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 06:46:26 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 06:46:26 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 06 06:46:26 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 06:46:26 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 06 06:46:26 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 06 06:46:26 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 06 06:46:26 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 06 06:46:26 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 06 06:46:26 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 06 06:46:26 localhost kernel: No NUMA configuration found
Dec 06 06:46:26 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 06 06:46:26 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 06 06:46:26 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 06 06:46:26 localhost kernel: Zone ranges:
Dec 06 06:46:26 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 06 06:46:26 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 06 06:46:26 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000043fffffff]
Dec 06 06:46:26 localhost kernel:   Device   empty
Dec 06 06:46:26 localhost kernel: Movable zone start for each node
Dec 06 06:46:26 localhost kernel: Early memory node ranges
Dec 06 06:46:26 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 06 06:46:26 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 06 06:46:26 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 06 06:46:26 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 06 06:46:26 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 06 06:46:26 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 06 06:46:26 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 06 06:46:26 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 06 06:46:26 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 06 06:46:26 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 06 06:46:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 06 06:46:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 06 06:46:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 06 06:46:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 06 06:46:26 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 06 06:46:26 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 06 06:46:26 localhost kernel: TSC deadline timer available
Dec 06 06:46:26 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 06 06:46:26 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 06 06:46:26 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 06 06:46:26 localhost kernel: Booting paravirtualized kernel on KVM
Dec 06 06:46:26 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 06 06:46:26 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 06 06:46:26 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 06 06:46:26 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u262144 alloc=1*2097152
Dec 06 06:46:26 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 06 06:46:26 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 06 06:46:26 localhost kernel: Fallback order for Node 0: 0 
Dec 06 06:46:26 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 4128475
Dec 06 06:46:26 localhost kernel: Policy zone: Normal
Dec 06 06:46:26 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 06 06:46:26 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Dec 06 06:46:26 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 06 06:46:26 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 06 06:46:26 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 06 06:46:26 localhost kernel: software IO TLB: area num 8.
Dec 06 06:46:26 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 06 06:46:26 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 06 06:46:26 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 06 06:46:26 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 06 06:46:26 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 06 06:46:26 localhost kernel: Dynamic Preempt: voluntary
Dec 06 06:46:26 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 06 06:46:26 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 06 06:46:26 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 06 06:46:26 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 06 06:46:26 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 06 06:46:26 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 06 06:46:26 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 06 06:46:26 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 06 06:46:26 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 06 06:46:26 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 06 06:46:26 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 06 06:46:26 localhost kernel: Console: colour VGA+ 80x25
Dec 06 06:46:26 localhost kernel: printk: console [tty0] enabled
Dec 06 06:46:26 localhost kernel: printk: console [ttyS0] enabled
Dec 06 06:46:26 localhost kernel: ACPI: Core revision 20211217
Dec 06 06:46:26 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 06 06:46:26 localhost kernel: x2apic enabled
Dec 06 06:46:26 localhost kernel: Switched APIC routing to physical x2apic.
Dec 06 06:46:26 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 06 06:46:26 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 06 06:46:26 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 06 06:46:26 localhost kernel: LSM: Security Framework initializing
Dec 06 06:46:26 localhost kernel: Yama: becoming mindful.
Dec 06 06:46:26 localhost kernel: SELinux:  Initializing.
Dec 06 06:46:26 localhost kernel: LSM support for eBPF active
Dec 06 06:46:26 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 06 06:46:26 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 06 06:46:26 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 06 06:46:26 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 06 06:46:26 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 06 06:46:26 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 06 06:46:26 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 06 06:46:26 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 06 06:46:26 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 06 06:46:26 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 06 06:46:26 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 06 06:46:26 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 06 06:46:26 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 06 06:46:26 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 06 06:46:26 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 06 06:46:26 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 06 06:46:26 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 06 06:46:26 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 06 06:46:26 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 06 06:46:26 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 06 06:46:26 localhost kernel: ... version:                0
Dec 06 06:46:26 localhost kernel: ... bit width:              48
Dec 06 06:46:26 localhost kernel: ... generic registers:      6
Dec 06 06:46:26 localhost kernel: ... value mask:             0000ffffffffffff
Dec 06 06:46:26 localhost kernel: ... max period:             00007fffffffffff
Dec 06 06:46:26 localhost kernel: ... fixed-purpose events:   0
Dec 06 06:46:26 localhost kernel: ... event mask:             000000000000003f
Dec 06 06:46:26 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 06 06:46:26 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 06 06:46:26 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 06 06:46:26 localhost kernel: x86: Booting SMP configuration:
Dec 06 06:46:26 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 06 06:46:26 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 06 06:46:26 localhost kernel: smpboot: Max logical packages: 8
Dec 06 06:46:26 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 06 06:46:26 localhost kernel: node 0 deferred pages initialised in 23ms
Dec 06 06:46:26 localhost kernel: devtmpfs: initialized
Dec 06 06:46:26 localhost kernel: x86/mm: Memory block size: 128MB
Dec 06 06:46:26 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 06 06:46:26 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 06 06:46:26 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 06 06:46:26 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 06 06:46:26 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 06 06:46:26 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 06 06:46:26 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 06 06:46:26 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 06 06:46:26 localhost kernel: audit: type=2000 audit(1765003585.499:1): state=initialized audit_enabled=0 res=1
Dec 06 06:46:26 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 06 06:46:26 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 06 06:46:26 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 06 06:46:26 localhost kernel: cpuidle: using governor menu
Dec 06 06:46:26 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 06 06:46:26 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 06 06:46:26 localhost kernel: PCI: Using configuration type 1 for base access
Dec 06 06:46:26 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 06 06:46:26 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 06 06:46:26 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 06 06:46:26 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 06 06:46:26 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 06 06:46:26 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 06 06:46:26 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 06 06:46:26 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 06 06:46:26 localhost kernel: ACPI: Interpreter enabled
Dec 06 06:46:26 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 06 06:46:26 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 06 06:46:26 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 06 06:46:26 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 06 06:46:26 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 06 06:46:26 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 06 06:46:26 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [3] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [4] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [5] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [6] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [7] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [8] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [9] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [10] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [11] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [12] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [13] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [14] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [15] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [16] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [17] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [18] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [19] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [20] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [21] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [22] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [23] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [24] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [25] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [26] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [27] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [28] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [29] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [30] registered
Dec 06 06:46:26 localhost kernel: acpiphp: Slot [31] registered
Dec 06 06:46:26 localhost kernel: PCI host bridge to bus 0000:00
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.2: reg 0x20: [io  0xc100-0xc11f]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 06 06:46:26 localhost kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Dec 06 06:46:26 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 06 06:46:26 localhost kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Dec 06 06:46:26 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 06 06:46:26 localhost kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 06 06:46:26 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 06 06:46:26 localhost kernel: pci 0000:00:06.0: reg 0x10: [io  0xc120-0xc13f]
Dec 06 06:46:26 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 06 06:46:26 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 06 06:46:26 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 06 06:46:26 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 06 06:46:26 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 06 06:46:26 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 06 06:46:26 localhost kernel: iommu: Default domain type: Translated 
Dec 06 06:46:26 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 06 06:46:26 localhost kernel: SCSI subsystem initialized
Dec 06 06:46:26 localhost kernel: ACPI: bus type USB registered
Dec 06 06:46:26 localhost kernel: usbcore: registered new interface driver usbfs
Dec 06 06:46:26 localhost kernel: usbcore: registered new interface driver hub
Dec 06 06:46:26 localhost kernel: usbcore: registered new device driver usb
Dec 06 06:46:26 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 06 06:46:26 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 06 06:46:26 localhost kernel: PTP clock support registered
Dec 06 06:46:26 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 06 06:46:26 localhost kernel: NetLabel: Initializing
Dec 06 06:46:26 localhost kernel: NetLabel:  domain hash size = 128
Dec 06 06:46:26 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 06 06:46:26 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 06 06:46:26 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 06 06:46:26 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 06 06:46:26 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 06 06:46:26 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 06 06:46:26 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 06 06:46:26 localhost kernel: vgaarb: loaded
Dec 06 06:46:26 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 06 06:46:26 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 06 06:46:26 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 06 06:46:26 localhost kernel: pnp: PnP ACPI init
Dec 06 06:46:26 localhost kernel: pnp 00:03: [dma 2]
Dec 06 06:46:26 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 06 06:46:26 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 06 06:46:26 localhost kernel: NET: Registered PF_INET protocol family
Dec 06 06:46:26 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 06 06:46:26 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 06 06:46:26 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 06 06:46:26 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 06 06:46:26 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 06 06:46:26 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 06 06:46:26 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 06 06:46:26 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 06 06:46:26 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 06 06:46:26 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 06 06:46:26 localhost kernel: NET: Registered PF_XDP protocol family
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 06 06:46:26 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 06 06:46:26 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 06 06:46:26 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 06 06:46:26 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 26803 usecs
Dec 06 06:46:26 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 06 06:46:26 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 06 06:46:26 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 06 06:46:26 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 06 06:46:26 localhost kernel: ACPI: bus type thunderbolt registered
Dec 06 06:46:26 localhost kernel: Initialise system trusted keyrings
Dec 06 06:46:26 localhost kernel: Key type blacklist registered
Dec 06 06:46:26 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 06 06:46:26 localhost kernel: zbud: loaded
Dec 06 06:46:26 localhost kernel: integrity: Platform Keyring initialized
Dec 06 06:46:26 localhost kernel: NET: Registered PF_ALG protocol family
Dec 06 06:46:26 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 06 06:46:26 localhost kernel: Key type asymmetric registered
Dec 06 06:46:26 localhost kernel: Asymmetric key parser 'x509' registered
Dec 06 06:46:26 localhost kernel: Running certificate verification selftests
Dec 06 06:46:26 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 06 06:46:26 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 06 06:46:26 localhost kernel: io scheduler mq-deadline registered
Dec 06 06:46:26 localhost kernel: io scheduler kyber registered
Dec 06 06:46:26 localhost kernel: io scheduler bfq registered
Dec 06 06:46:26 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 06 06:46:26 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 06 06:46:26 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 06 06:46:26 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 06 06:46:26 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 06 06:46:26 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 06 06:46:26 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 06 06:46:26 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 06 06:46:26 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 06 06:46:26 localhost kernel: Non-volatile memory driver v1.3
Dec 06 06:46:26 localhost kernel: rdac: device handler registered
Dec 06 06:46:26 localhost kernel: hp_sw: device handler registered
Dec 06 06:46:26 localhost kernel: emc: device handler registered
Dec 06 06:46:26 localhost kernel: alua: device handler registered
Dec 06 06:46:26 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 06 06:46:26 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 06 06:46:26 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 06 06:46:26 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 06 06:46:26 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 06 06:46:26 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 06 06:46:26 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 06 06:46:26 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 06 06:46:26 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 06 06:46:26 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 06 06:46:26 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 06 06:46:26 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 06 06:46:26 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 06 06:46:26 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 06 06:46:26 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 06 06:46:26 localhost kernel: hub 1-0:1.0: USB hub found
Dec 06 06:46:26 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 06 06:46:26 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 06 06:46:26 localhost kernel: usbserial: USB Serial support registered for generic
Dec 06 06:46:26 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 06 06:46:26 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 06 06:46:26 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 06 06:46:26 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 06 06:46:26 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 06 06:46:26 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 06 06:46:26 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 06 06:46:26 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-06T06:46:25 UTC (1765003585)
Dec 06 06:46:26 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 06 06:46:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 06 06:46:26 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 06 06:46:26 localhost kernel: usbcore: registered new interface driver usbhid
Dec 06 06:46:26 localhost kernel: usbhid: USB HID core driver
Dec 06 06:46:26 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 06 06:46:26 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 06 06:46:26 localhost kernel: Initializing XFRM netlink socket
Dec 06 06:46:26 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 06 06:46:26 localhost kernel: Segment Routing with IPv6
Dec 06 06:46:26 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 06 06:46:26 localhost kernel: mpls_gso: MPLS GSO support
Dec 06 06:46:26 localhost kernel: IPI shorthand broadcast: enabled
Dec 06 06:46:26 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 06 06:46:26 localhost kernel: AES CTR mode by8 optimization enabled
Dec 06 06:46:26 localhost kernel: sched_clock: Marking stable (745059463, 184011206)->(1056574837, -127504168)
Dec 06 06:46:26 localhost kernel: registered taskstats version 1
Dec 06 06:46:26 localhost kernel: Loading compiled-in X.509 certificates
Dec 06 06:46:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 06 06:46:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 06 06:46:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 06 06:46:26 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 06 06:46:26 localhost kernel: page_owner is disabled
Dec 06 06:46:26 localhost kernel: Key type big_key registered
Dec 06 06:46:26 localhost kernel: Freeing initrd memory: 74232K
Dec 06 06:46:26 localhost kernel: Key type encrypted registered
Dec 06 06:46:26 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 06 06:46:26 localhost kernel: Loading compiled-in module X.509 certificates
Dec 06 06:46:26 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 06 06:46:26 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 06 06:46:26 localhost kernel: ima: No architecture policies found
Dec 06 06:46:26 localhost kernel: evm: Initialising EVM extended attributes:
Dec 06 06:46:26 localhost kernel: evm: security.selinux
Dec 06 06:46:26 localhost kernel: evm: security.SMACK64 (disabled)
Dec 06 06:46:26 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 06 06:46:26 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 06 06:46:26 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 06 06:46:26 localhost kernel: evm: security.apparmor (disabled)
Dec 06 06:46:26 localhost kernel: evm: security.ima
Dec 06 06:46:26 localhost kernel: evm: security.capability
Dec 06 06:46:26 localhost kernel: evm: HMAC attrs: 0x1
Dec 06 06:46:26 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 06 06:46:26 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 06 06:46:26 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 06 06:46:26 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 06 06:46:26 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 06 06:46:26 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 06 06:46:26 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 06 06:46:26 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 06 06:46:26 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 06 06:46:26 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 06 06:46:26 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 06 06:46:26 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 06 06:46:26 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 06 06:46:26 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 06 06:46:26 localhost kernel: Run /init as init process
Dec 06 06:46:26 localhost kernel:   with arguments:
Dec 06 06:46:26 localhost kernel:     /init
Dec 06 06:46:26 localhost kernel:   with environment:
Dec 06 06:46:26 localhost kernel:     HOME=/
Dec 06 06:46:26 localhost kernel:     TERM=linux
Dec 06 06:46:26 localhost kernel:     BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64
Dec 06 06:46:26 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 06:46:26 localhost systemd[1]: Detected virtualization kvm.
Dec 06 06:46:26 localhost systemd[1]: Detected architecture x86-64.
Dec 06 06:46:26 localhost systemd[1]: Running in initrd.
Dec 06 06:46:26 localhost systemd[1]: No hostname configured, using default hostname.
Dec 06 06:46:26 localhost systemd[1]: Hostname set to <localhost>.
Dec 06 06:46:26 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 06 06:46:26 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 06 06:46:26 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 06:46:26 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 06:46:26 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 06 06:46:26 localhost systemd[1]: Reached target Local File Systems.
Dec 06 06:46:26 localhost systemd[1]: Reached target Path Units.
Dec 06 06:46:26 localhost systemd[1]: Reached target Slice Units.
Dec 06 06:46:26 localhost systemd[1]: Reached target Swaps.
Dec 06 06:46:26 localhost systemd[1]: Reached target Timer Units.
Dec 06 06:46:26 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 06:46:26 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 06 06:46:26 localhost systemd[1]: Listening on Journal Socket.
Dec 06 06:46:26 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 06:46:26 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 06:46:26 localhost systemd[1]: Reached target Socket Units.
Dec 06 06:46:26 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 06:46:26 localhost systemd[1]: Starting Journal Service...
Dec 06 06:46:26 localhost systemd[1]: Starting Load Kernel Modules...
Dec 06 06:46:26 localhost systemd[1]: Starting Create System Users...
Dec 06 06:46:26 localhost systemd[1]: Starting Setup Virtual Console...
Dec 06 06:46:26 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 06:46:26 localhost systemd-journald[284]: Journal started
Dec 06 06:46:26 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/f03c623985fa4e2bb1f756cf939bb96f) is 8.0M, max 314.7M, 306.7M free.
Dec 06 06:46:26 localhost systemd-modules-load[285]: Module 'msr' is built in
Dec 06 06:46:26 localhost systemd[1]: Started Journal Service.
Dec 06 06:46:26 localhost systemd[1]: Finished Load Kernel Modules.
Dec 06 06:46:26 localhost systemd[1]: Finished Setup Virtual Console.
Dec 06 06:46:26 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 06 06:46:26 localhost systemd[1]: Starting dracut cmdline hook...
Dec 06 06:46:26 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 06:46:26 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Dec 06 06:46:26 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Dec 06 06:46:26 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Dec 06 06:46:26 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 06 06:46:26 localhost systemd[1]: Finished Create System Users.
Dec 06 06:46:26 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 06:46:26 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 06 06:46:26 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 06:46:26 localhost dracut-cmdline[289]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 06 06:46:26 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 06:46:26 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 06:46:26 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 06:46:26 localhost systemd[1]: Finished dracut cmdline hook.
Dec 06 06:46:26 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 06 06:46:26 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 06 06:46:26 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 06 06:46:26 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 06 06:46:26 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 06 06:46:26 localhost kernel: RPC: Registered udp transport module.
Dec 06 06:46:26 localhost kernel: RPC: Registered tcp transport module.
Dec 06 06:46:26 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 06 06:46:26 localhost rpc.statd[407]: Version 2.5.4 starting
Dec 06 06:46:26 localhost rpc.statd[407]: Initializing NSM state
Dec 06 06:46:26 localhost rpc.idmapd[412]: Setting log level to 0
Dec 06 06:46:26 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 06 06:46:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 06:46:26 localhost systemd-udevd[425]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 06:46:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 06:46:26 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 06 06:46:26 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 06 06:46:26 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 06:46:26 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 06:46:26 localhost systemd[1]: Reached target System Initialization.
Dec 06 06:46:26 localhost systemd[1]: Reached target Basic System.
Dec 06 06:46:26 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 06:46:26 localhost systemd[1]: Reached target Network.
Dec 06 06:46:26 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 06 06:46:26 localhost systemd[1]: Starting dracut initqueue hook...
Dec 06 06:46:26 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 06 06:46:26 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 06 06:46:26 localhost kernel: GPT:20971519 != 838860799
Dec 06 06:46:26 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 06 06:46:26 localhost kernel: GPT:20971519 != 838860799
Dec 06 06:46:26 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 06 06:46:26 localhost kernel:  vda: vda1 vda2 vda3 vda4
Dec 06 06:46:26 localhost kernel: libata version 3.00 loaded.
Dec 06 06:46:26 localhost systemd-udevd[453]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:46:26 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 06 06:46:26 localhost kernel: scsi host0: ata_piix
Dec 06 06:46:26 localhost kernel: scsi host1: ata_piix
Dec 06 06:46:26 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 06 06:46:26 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 06 06:46:26 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 06 06:46:27 localhost systemd[1]: Reached target Initrd Root Device.
Dec 06 06:46:27 localhost kernel: ata1: found unknown device (class 0)
Dec 06 06:46:27 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 06 06:46:27 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 06 06:46:27 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 06 06:46:27 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 06 06:46:27 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 06 06:46:27 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 06 06:46:27 localhost systemd[1]: Finished dracut initqueue hook.
Dec 06 06:46:27 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 06:46:27 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 06 06:46:27 localhost systemd[1]: Reached target Remote File Systems.
Dec 06 06:46:27 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 06 06:46:27 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 06 06:46:27 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 06 06:46:27 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Dec 06 06:46:27 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 06 06:46:27 localhost systemd[1]: Mounting /sysroot...
Dec 06 06:46:27 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 06 06:46:27 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 06 06:46:27 localhost kernel: XFS (vda4): Ending clean mount
Dec 06 06:46:27 localhost systemd[1]: Mounted /sysroot.
Dec 06 06:46:27 localhost systemd[1]: Reached target Initrd Root File System.
Dec 06 06:46:27 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 06 06:46:27 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 06 06:46:27 localhost systemd[1]: Reached target Initrd File Systems.
Dec 06 06:46:27 localhost systemd[1]: Reached target Initrd Default Target.
Dec 06 06:46:27 localhost systemd[1]: Starting dracut mount hook...
Dec 06 06:46:27 localhost systemd[1]: Finished dracut mount hook.
Dec 06 06:46:27 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 06 06:46:27 localhost rpc.idmapd[412]: exiting on signal 15
Dec 06 06:46:27 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 06 06:46:27 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 06 06:46:27 localhost systemd[1]: Stopped target Network.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Timer Units.
Dec 06 06:46:27 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 06 06:46:27 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Basic System.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Path Units.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Remote File Systems.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Slice Units.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Socket Units.
Dec 06 06:46:27 localhost systemd[1]: Stopped target System Initialization.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Local File Systems.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Swaps.
Dec 06 06:46:27 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut mount hook.
Dec 06 06:46:27 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 06 06:46:27 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 06 06:46:27 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 06 06:46:27 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 06 06:46:27 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 06 06:46:27 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 06 06:46:27 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 06 06:46:27 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 06 06:46:27 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 06 06:46:27 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 06 06:46:27 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 06 06:46:27 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 06 06:46:27 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 06 06:46:27 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Closed udev Control Socket.
Dec 06 06:46:27 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Closed udev Kernel Socket.
Dec 06 06:46:27 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 06 06:46:27 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 06 06:46:27 localhost systemd[1]: Starting Cleanup udev Database...
Dec 06 06:46:27 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 06 06:46:27 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 06 06:46:27 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Stopped Create System Users.
Dec 06 06:46:27 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 06 06:46:27 localhost systemd[1]: Finished Cleanup udev Database.
Dec 06 06:46:27 localhost systemd[1]: Reached target Switch Root.
Dec 06 06:46:27 localhost systemd[1]: Starting Switch Root...
Dec 06 06:46:28 localhost systemd[1]: Switching root.
Dec 06 06:46:28 localhost systemd-journald[284]: Journal stopped
Dec 06 06:46:28 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Dec 06 06:46:28 localhost kernel: audit: type=1404 audit(1765003588.093:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability open_perms=1
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 06:46:28 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 06:46:28 localhost kernel: audit: type=1403 audit(1765003588.193:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 06 06:46:28 localhost systemd[1]: Successfully loaded SELinux policy in 102.230ms.
Dec 06 06:46:28 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.885ms.
Dec 06 06:46:28 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 06:46:28 localhost systemd[1]: Detected virtualization kvm.
Dec 06 06:46:28 localhost systemd[1]: Detected architecture x86-64.
Dec 06 06:46:28 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 06:46:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 06:46:28 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 06 06:46:28 localhost systemd[1]: Stopped Switch Root.
Dec 06 06:46:28 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 06 06:46:28 localhost systemd[1]: Created slice Slice /system/getty.
Dec 06 06:46:28 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 06 06:46:28 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 06 06:46:28 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 06 06:46:28 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 06 06:46:28 localhost systemd[1]: Created slice User and Session Slice.
Dec 06 06:46:28 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 06 06:46:28 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 06 06:46:28 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 06 06:46:28 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 06 06:46:28 localhost systemd[1]: Stopped target Switch Root.
Dec 06 06:46:28 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 06 06:46:28 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 06 06:46:28 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 06 06:46:28 localhost systemd[1]: Reached target Path Units.
Dec 06 06:46:28 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 06 06:46:28 localhost systemd[1]: Reached target Slice Units.
Dec 06 06:46:28 localhost systemd[1]: Reached target Swaps.
Dec 06 06:46:28 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 06 06:46:28 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 06 06:46:28 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 06 06:46:28 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 06 06:46:28 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 06 06:46:28 localhost systemd[1]: Listening on udev Control Socket.
Dec 06 06:46:28 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 06 06:46:28 localhost systemd[1]: Mounting Huge Pages File System...
Dec 06 06:46:28 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 06 06:46:28 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 06 06:46:28 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 06 06:46:28 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 06 06:46:28 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 06 06:46:28 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 06:46:28 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 06 06:46:28 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 06 06:46:28 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 06 06:46:28 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 06 06:46:28 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 06 06:46:28 localhost systemd[1]: Stopped Journal Service.
Dec 06 06:46:28 localhost systemd[1]: Starting Journal Service...
Dec 06 06:46:28 localhost systemd[1]: Starting Load Kernel Modules...
Dec 06 06:46:28 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 06 06:46:28 localhost kernel: fuse: init (API version 7.36)
Dec 06 06:46:28 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 06 06:46:28 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 06 06:46:28 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 06 06:46:28 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 06 06:46:28 localhost systemd[1]: Mounted Huge Pages File System.
Dec 06 06:46:28 localhost systemd-journald[618]: Journal started
Dec 06 06:46:28 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/4b30904fc4748c16d0c72dbebcabab49) is 8.0M, max 314.7M, 306.7M free.
Dec 06 06:46:28 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 06 06:46:28 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 06 06:46:28 localhost systemd-modules-load[619]: Module 'msr' is built in
Dec 06 06:46:28 localhost systemd[1]: Started Journal Service.
Dec 06 06:46:28 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 06 06:46:28 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 06 06:46:28 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 06 06:46:28 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 06 06:46:28 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 06:46:28 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 06:46:28 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 06 06:46:28 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 06 06:46:28 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 06 06:46:28 localhost systemd[1]: Finished Load Kernel Modules.
Dec 06 06:46:28 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 06 06:46:28 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 06 06:46:28 localhost systemd[1]: Mounting FUSE Control File System...
Dec 06 06:46:28 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 06 06:46:28 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 06 06:46:28 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 06 06:46:28 localhost kernel: ACPI: bus type drm_connector registered
Dec 06 06:46:28 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 06 06:46:28 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 06 06:46:28 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 06 06:46:28 localhost systemd[1]: Starting Create System Users...
Dec 06 06:46:28 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/4b30904fc4748c16d0c72dbebcabab49) is 8.0M, max 314.7M, 306.7M free.
Dec 06 06:46:28 localhost systemd-journald[618]: Received client request to flush runtime journal.
Dec 06 06:46:28 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 06 06:46:28 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 06 06:46:28 localhost systemd[1]: Mounted FUSE Control File System.
Dec 06 06:46:28 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 06 06:46:28 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 06 06:46:28 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 06 06:46:28 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 06 06:46:28 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 06 06:46:28 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 06 06:46:28 localhost systemd-sysusers[632]: Creating group 'sgx' with GID 989.
Dec 06 06:46:28 localhost systemd-sysusers[632]: Creating group 'systemd-oom' with GID 988.
Dec 06 06:46:28 localhost systemd-sysusers[632]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 06 06:46:28 localhost systemd[1]: Finished Create System Users.
Dec 06 06:46:28 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 06 06:46:28 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 06 06:46:28 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 06 06:46:28 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 06 06:46:29 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 06 06:46:29 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 06:46:29 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 06:46:29 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 06:46:29 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 06 06:46:29 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 06 06:46:29 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 06 06:46:29 localhost systemd-udevd[641]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:46:29 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 06 06:46:29 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 06 06:46:29 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 06 06:46:29 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 06 06:46:29 localhost systemd[1]: Mounting /boot...
Dec 06 06:46:29 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 06 06:46:29 localhost systemd-fsck[684]: fsck.fat 4.2 (2021-01-31)
Dec 06 06:46:29 localhost systemd-fsck[684]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 06 06:46:29 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 06 06:46:29 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 06 06:46:29 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 06 06:46:29 localhost kernel: XFS (vda3): Ending clean mount
Dec 06 06:46:29 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 06 06:46:29 localhost systemd[1]: Mounted /boot.
Dec 06 06:46:29 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 06 06:46:29 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 06 06:46:29 localhost kernel: Console: switching to colour dummy device 80x25
Dec 06 06:46:29 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 06 06:46:29 localhost kernel: [drm] features: -context_init
Dec 06 06:46:29 localhost kernel: [drm] number of scanouts: 1
Dec 06 06:46:29 localhost kernel: [drm] number of cap sets: 0
Dec 06 06:46:29 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 06 06:46:29 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 06 06:46:29 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 06 06:46:29 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 06 06:46:29 localhost kernel: SVM: TSC scaling supported
Dec 06 06:46:29 localhost kernel: kvm: Nested Virtualization enabled
Dec 06 06:46:29 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 06 06:46:29 localhost kernel: SVM: LBR virtualization supported
Dec 06 06:46:29 localhost systemd[1]: Mounting /boot/efi...
Dec 06 06:46:29 localhost systemd[1]: Mounted /boot/efi.
Dec 06 06:46:29 localhost systemd[1]: Reached target Local File Systems.
Dec 06 06:46:29 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 06 06:46:29 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 06 06:46:29 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 06 06:46:29 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 06:46:29 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 06 06:46:29 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 06 06:46:29 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 06 06:46:29 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 716 (bootctl)
Dec 06 06:46:29 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 06 06:46:29 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 06 06:46:29 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 06 06:46:29 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 06 06:46:29 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 06 06:46:29 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 06 06:46:29 localhost systemd[1]: Starting Security Auditing Service...
Dec 06 06:46:29 localhost systemd[1]: Starting RPC Bind...
Dec 06 06:46:29 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 06 06:46:29 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 06 06:46:29 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 06 06:46:29 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 06 06:46:29 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 06 06:46:29 localhost systemd[1]: Starting Update is Completed...
Dec 06 06:46:29 localhost systemd[1]: Finished Update is Completed.
Dec 06 06:46:29 localhost systemd[1]: Started RPC Bind.
Dec 06 06:46:29 localhost augenrules[731]: /sbin/augenrules: No change
Dec 06 06:46:29 localhost augenrules[742]: No rules
Dec 06 06:46:29 localhost augenrules[742]: enabled 1
Dec 06 06:46:29 localhost augenrules[742]: failure 1
Dec 06 06:46:29 localhost augenrules[742]: pid 726
Dec 06 06:46:29 localhost augenrules[742]: rate_limit 0
Dec 06 06:46:29 localhost augenrules[742]: backlog_limit 8192
Dec 06 06:46:29 localhost augenrules[742]: lost 0
Dec 06 06:46:29 localhost augenrules[742]: backlog 3
Dec 06 06:46:29 localhost augenrules[742]: backlog_wait_time 60000
Dec 06 06:46:29 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 06 06:46:29 localhost augenrules[742]: enabled 1
Dec 06 06:46:29 localhost augenrules[742]: failure 1
Dec 06 06:46:29 localhost augenrules[742]: pid 726
Dec 06 06:46:29 localhost augenrules[742]: rate_limit 0
Dec 06 06:46:29 localhost augenrules[742]: backlog_limit 8192
Dec 06 06:46:29 localhost augenrules[742]: lost 0
Dec 06 06:46:29 localhost augenrules[742]: backlog 4
Dec 06 06:46:29 localhost augenrules[742]: backlog_wait_time 60000
Dec 06 06:46:29 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 06 06:46:29 localhost augenrules[742]: enabled 1
Dec 06 06:46:29 localhost augenrules[742]: failure 1
Dec 06 06:46:29 localhost augenrules[742]: pid 726
Dec 06 06:46:29 localhost augenrules[742]: rate_limit 0
Dec 06 06:46:29 localhost augenrules[742]: backlog_limit 8192
Dec 06 06:46:29 localhost augenrules[742]: lost 0
Dec 06 06:46:29 localhost augenrules[742]: backlog 3
Dec 06 06:46:29 localhost augenrules[742]: backlog_wait_time 60000
Dec 06 06:46:29 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 06 06:46:29 localhost systemd[1]: Started Security Auditing Service.
Dec 06 06:46:29 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 06 06:46:29 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 06 06:46:29 localhost systemd[1]: Reached target System Initialization.
Dec 06 06:46:29 localhost systemd[1]: Started dnf makecache --timer.
Dec 06 06:46:29 localhost systemd[1]: Started Daily rotation of log files.
Dec 06 06:46:29 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 06 06:46:29 localhost systemd[1]: Reached target Timer Units.
Dec 06 06:46:29 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 06 06:46:29 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 06 06:46:29 localhost systemd[1]: Reached target Socket Units.
Dec 06 06:46:29 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 06 06:46:29 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 06 06:46:29 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 06:46:29 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 06 06:46:29 localhost systemd[1]: Reached target Basic System.
Dec 06 06:46:29 localhost dbus-broker-lau[751]: Ready
Dec 06 06:46:29 localhost systemd[1]: Starting NTP client/server...
Dec 06 06:46:29 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 06 06:46:29 localhost systemd[1]: Started irqbalance daemon.
Dec 06 06:46:29 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 06 06:46:29 localhost systemd[1]: Starting System Logging Service...
Dec 06 06:46:29 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:46:29 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:46:29 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 06:46:29 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 06 06:46:29 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 06 06:46:29 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 06 06:46:29 localhost systemd[1]: Starting User Login Management...
Dec 06 06:46:29 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 06 06:46:29 localhost systemd[1]: Started System Logging Service.
Dec 06 06:46:29 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Dec 06 06:46:29 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 06 06:46:30 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 06:46:30 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Dec 06 06:46:30 localhost chronyd[766]: Loaded seccomp filter (level 2)
Dec 06 06:46:30 localhost systemd[1]: Started NTP client/server.
Dec 06 06:46:30 localhost systemd-logind[760]: New seat seat0.
Dec 06 06:46:30 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 06:46:30 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 06:46:30 localhost systemd[1]: Started User Login Management.
Dec 06 06:46:30 localhost rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 06:46:30 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Sat, 06 Dec 2025 06:46:30 +0000. Up 5.48 seconds.
Dec 06 06:46:30 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 06 06:46:30 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 06 06:46:30 localhost systemd[1]: Starting Hostname Service...
Dec 06 06:46:30 localhost systemd[1]: Started Hostname Service.
Dec 06 06:46:30 np0005548790.novalocal systemd-hostnamed[784]: Hostname set to <np0005548790.novalocal> (static)
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: run-cloud\x2dinit-tmp-tmp05pwm6iw.mount: Deactivated successfully.
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Reached target Preparation for Network.
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting Network Manager...
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7749] NetworkManager (version 1.42.2-1.el9) is starting... (boot:212d9f56-eeac-46d2-9ba5-a3e0b9ec2c1a)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7757] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Started Network Manager.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7791] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Reached target Network.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7848] manager[0x557e120b8020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7890] hostname: hostname: using hostnamed
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7891] hostname: static hostname changed from (none) to "np0005548790.novalocal"
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.7899] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8035] manager[0x557e120b8020]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8036] manager[0x557e120b8020]: rfkill: WWAN hardware radio set enabled
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8117] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8120] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8123] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8124] manager: Networking is enabled by state file
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8143] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8143] settings: Loaded settings plugin: keyfile (internal)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8180] dhcp: init: Using DHCP client 'internal'
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8185] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8206] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8215] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8226] device (lo): Activation: starting connection 'lo' (a79cf659-28d7-404e-b1b6-918ea52b62d2)
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8240] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8245] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8300] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8303] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8305] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8309] device (eth0): carrier: link connected
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8312] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8319] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Reached target NFS client services.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8356] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8363] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8364] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8367] manager: NetworkManager state is now CONNECTING
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8370] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8380] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8384] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Reached target Remote File Systems.
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8497] dhcp4 (eth0): state changed new lease, address=38.102.83.234
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8506] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8532] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8540] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8547] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8554] device (lo): Activation: successful, device activated.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8562] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8568] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8573] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8576] device (eth0): Activation: successful, device activated.
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8582] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 06 06:46:30 np0005548790.novalocal NetworkManager[789]: <info>  [1765003590.8588] manager: startup complete
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 06 06:46:30 np0005548790.novalocal systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: Cloud-init v. 22.1-9.el9 running 'init' at Sat, 06 Dec 2025 06:46:31 +0000. Up 6.31 seconds.
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |  eth0  | True |        38.102.83.234         | 255.255.255.0 | global | fa:16:3e:7e:49:5b |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |  eth0  | True | fe80::f816:3eff:fe7e:495b/64 |       .       |  link  | fa:16:3e:7e:49:5b |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 06 06:46:31 np0005548790.novalocal cloud-init[994]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 06 06:46:31 np0005548790.novalocal systemd[1]: Starting Authorization Manager...
Dec 06 06:46:31 np0005548790.novalocal polkitd[1036]: Started polkitd version 0.117
Dec 06 06:46:31 np0005548790.novalocal systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 06:46:31 np0005548790.novalocal polkitd[1036]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 06:46:31 np0005548790.novalocal polkitd[1036]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 06:46:31 np0005548790.novalocal polkitd[1036]: Finished loading, compiling and executing 4 rules
Dec 06 06:46:31 np0005548790.novalocal systemd[1]: Started Authorization Manager.
Dec 06 06:46:31 np0005548790.novalocal polkitd[1036]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 06 06:46:32 np0005548790.novalocal useradd[1115]: new group: name=cloud-user, GID=1001
Dec 06 06:46:32 np0005548790.novalocal useradd[1115]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 06 06:46:32 np0005548790.novalocal useradd[1115]: add 'cloud-user' to group 'adm'
Dec 06 06:46:32 np0005548790.novalocal useradd[1115]: add 'cloud-user' to group 'systemd-journal'
Dec 06 06:46:32 np0005548790.novalocal useradd[1115]: add 'cloud-user' to shadow group 'adm'
Dec 06 06:46:32 np0005548790.novalocal useradd[1115]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Generating public/private rsa key pair.
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: The key fingerprint is:
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: SHA256:G/SPPAQUIVJdvq73U/KmYfieJ0b+6b8wRH/dUQVqNew root@np0005548790.novalocal
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: The key's randomart image is:
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: +---[RSA 3072]----+
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |    ..o.++.  .+.+|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |     . o..   o...|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |        o . oo . |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |       . o o. E +|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |        S +  . .+|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |         * ++ . .|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |        . *o+*   |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |         ..+=o*. |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |        .. +*O+oo|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: +----[SHA256]-----+
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Generating public/private ecdsa key pair.
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: The key fingerprint is:
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: SHA256:x4wG4rjTyNyNDF1NdDHX011VFpwZgfXRU7N9qZxPwfQ root@np0005548790.novalocal
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: The key's randomart image is:
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: +---[ECDSA 256]---+
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |       .o +...+X^|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |       o . o .=*%|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |    . o .      *E|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |   + o . +  . o o|
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |  o o   S +  + . |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: | o B o . .    o  |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |  * = .        . |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |   .             |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |                 |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: +----[SHA256]-----+
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Generating public/private ed25519 key pair.
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: The key fingerprint is:
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: SHA256:tUKIjy/6kLbOhk83p6Lkq4nkkHMqBGNq2aLlXm+l6oE root@np0005548790.novalocal
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: The key's randomart image is:
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: +--[ED25519 256]--+
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |                 |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |     . .         |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |    . . . .      |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |o.   o . . .     |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |+.o . . S .      |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |.*.+ .  ..       |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |B*E * oo         |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |@O+* Bo          |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: |OXO+=o.          |
Dec 06 06:46:33 np0005548790.novalocal cloud-init[994]: +----[SHA256]-----+
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Reached target Network is Online.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 06 06:46:33 np0005548790.novalocal sm-notify[1128]: Version 2.5.4 starting
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Starting Permit User Sessions...
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Finished Permit User Sessions.
Dec 06 06:46:33 np0005548790.novalocal sshd[1129]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Started Command Scheduler.
Dec 06 06:46:33 np0005548790.novalocal sshd[1129]: Server listening on 0.0.0.0 port 22.
Dec 06 06:46:33 np0005548790.novalocal sshd[1129]: Server listening on :: port 22.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Started Getty on tty1.
Dec 06 06:46:33 np0005548790.novalocal crond[1134]: (CRON) STARTUP (1.5.7)
Dec 06 06:46:33 np0005548790.novalocal crond[1134]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 06 06:46:33 np0005548790.novalocal crond[1134]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 52% if used.)
Dec 06 06:46:33 np0005548790.novalocal crond[1134]: (CRON) INFO (running with inotify support)
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Reached target Login Prompts.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Reached target Multi-User System.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 06 06:46:33 np0005548790.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 06 06:46:33 np0005548790.novalocal kdumpctl[1132]: kdump: No kdump initial ramdisk found.
Dec 06 06:46:33 np0005548790.novalocal kdumpctl[1132]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 06 06:46:33 np0005548790.novalocal cloud-init[1249]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Sat, 06 Dec 2025 06:46:33 +0000. Up 9.06 seconds.
Dec 06 06:46:34 np0005548790.novalocal systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 06 06:46:34 np0005548790.novalocal systemd[1]: Starting Execute cloud user/final scripts...
Dec 06 06:46:34 np0005548790.novalocal dracut[1414]: dracut-057-21.git20230214.el9
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1432]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Sat, 06 Dec 2025 06:46:34 +0000. Up 9.42 seconds.
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1450]: #############################################################
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1453]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1462]: 256 SHA256:x4wG4rjTyNyNDF1NdDHX011VFpwZgfXRU7N9qZxPwfQ root@np0005548790.novalocal (ECDSA)
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1469]: 256 SHA256:tUKIjy/6kLbOhk83p6Lkq4nkkHMqBGNq2aLlXm+l6oE root@np0005548790.novalocal (ED25519)
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1476]: 3072 SHA256:G/SPPAQUIVJdvq73U/KmYfieJ0b+6b8wRH/dUQVqNew root@np0005548790.novalocal (RSA)
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1479]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1482]: #############################################################
Dec 06 06:46:34 np0005548790.novalocal cloud-init[1432]: Cloud-init v. 22.1-9.el9 finished at Sat, 06 Dec 2025 06:46:34 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.65 seconds
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 06 06:46:34 np0005548790.novalocal systemd[1]: Reloading Network Manager...
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 06 06:46:34 np0005548790.novalocal NetworkManager[789]: <info>  [1765003594.5732] audit: op="reload" arg="0" pid=1583 uid=0 result="success"
Dec 06 06:46:34 np0005548790.novalocal NetworkManager[789]: <info>  [1765003594.5740] config: signal: SIGHUP (no changes from disk)
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 06 06:46:34 np0005548790.novalocal systemd[1]: Reloaded Network Manager.
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 06 06:46:34 np0005548790.novalocal systemd[1]: Finished Execute cloud user/final scripts.
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 06 06:46:34 np0005548790.novalocal systemd[1]: Reached target Cloud-init target.
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'resume' will not be installed, because it's in the list to be omitted!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 06 06:46:34 np0005548790.novalocal dracut[1417]: dracut module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: memstrack is not available
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: memstrack is not available
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: *** Including module: systemd ***
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: *** Including module: systemd-initrd ***
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: *** Including module: i18n ***
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: No KEYMAP configured.
Dec 06 06:46:35 np0005548790.novalocal chronyd[766]: Selected source 23.159.16.194 (2.rhel.pool.ntp.org)
Dec 06 06:46:35 np0005548790.novalocal chronyd[766]: System clock TAI offset set to 37 seconds
Dec 06 06:46:35 np0005548790.novalocal dracut[1417]: *** Including module: drm ***
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]: *** Including module: prefixdevname ***
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]: *** Including module: kernel-modules ***
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]: *** Including module: kernel-modules-extra ***
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]: *** Including module: qemu ***
Dec 06 06:46:36 np0005548790.novalocal dracut[1417]: *** Including module: fstab-sys ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: rootfs-block ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: terminfo ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: udev-rules ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: Skipping udev rule: 91-permissions.rules
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: virtiofs ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: dracut-systemd ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: usrmount ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: base ***
Dec 06 06:46:37 np0005548790.novalocal dracut[1417]: *** Including module: fs-lib ***
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]: *** Including module: kdumpbase ***
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:   microcode_ctl module: mangling fw_dir
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]:     microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]: *** Including module: shutdown ***
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]: *** Including module: squash ***
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]: *** Including modules done ***
Dec 06 06:46:38 np0005548790.novalocal dracut[1417]: *** Installing kernel module dependencies ***
Dec 06 06:46:39 np0005548790.novalocal dracut[1417]: *** Installing kernel module dependencies done ***
Dec 06 06:46:39 np0005548790.novalocal dracut[1417]: *** Resolving executable dependencies ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: *** Resolving executable dependencies done ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: *** Hardlinking files ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Mode:           real
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Files:          1099
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Linked:         3 files
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Compared:       0 xattrs
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Compared:       373 files
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Saved:          61.04 KiB
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Duration:       0.025961 seconds
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: *** Hardlinking files done ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Could not find 'strip'. Not stripping the initramfs.
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: *** Generating early-microcode cpio image ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: *** Constructing AuthenticAMD.bin ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: *** Store current command line parameters ***
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: Stored kernel commandline:
Dec 06 06:46:40 np0005548790.novalocal dracut[1417]: No dracut internal kernel commandline stored in the initramfs
Dec 06 06:46:41 np0005548790.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 06:46:41 np0005548790.novalocal dracut[1417]: *** Install squash loader ***
Dec 06 06:46:41 np0005548790.novalocal dracut[1417]: *** Squashing the files inside the initramfs ***
Dec 06 06:46:42 np0005548790.novalocal dracut[1417]: *** Squashing the files inside the initramfs done ***
Dec 06 06:46:42 np0005548790.novalocal dracut[1417]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 06 06:46:42 np0005548790.novalocal dracut[1417]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 06 06:46:43 np0005548790.novalocal kdumpctl[1132]: kdump: kexec: loaded kdump kernel
Dec 06 06:46:43 np0005548790.novalocal kdumpctl[1132]: kdump: Starting kdump: [OK]
Dec 06 06:46:43 np0005548790.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 06 06:46:43 np0005548790.novalocal systemd[1]: Startup finished in 1.254s (kernel) + 2.043s (initrd) + 15.489s (userspace) = 18.787s.
Dec 06 06:46:51 np0005548790.novalocal sshd[4152]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4152]: Connection reset by 38.102.83.114 port 50524 [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4154]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4154]: Unable to negotiate with 38.102.83.114 port 50536: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4156]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4156]: Connection reset by 38.102.83.114 port 50550 [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4158]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4158]: Unable to negotiate with 38.102.83.114 port 50556: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4160]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4160]: Unable to negotiate with 38.102.83.114 port 50566: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4162]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4164]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4166]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4166]: fatal: mm_answer_sign: sign: error in libcrypto
Dec 06 06:46:51 np0005548790.novalocal sshd[4162]: Connection closed by 38.102.83.114 port 50572 [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4168]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:46:51 np0005548790.novalocal sshd[4168]: Unable to negotiate with 38.102.83.114 port 50598: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 06 06:46:51 np0005548790.novalocal sshd[4164]: Connection closed by 38.102.83.114 port 50582 [preauth]
Dec 06 06:47:00 np0005548790.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 06:48:24 np0005548790.novalocal sshd[4175]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:48:24 np0005548790.novalocal sshd[4175]: Accepted publickey for zuul from 38.102.83.114 port 39212 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 06 06:48:24 np0005548790.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 06 06:48:24 np0005548790.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 06 06:48:24 np0005548790.novalocal systemd-logind[760]: New session 1 of user zuul.
Dec 06 06:48:24 np0005548790.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 06 06:48:24 np0005548790.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Queued start job for default target Main User Target.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Created slice User Application Slice.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Reached target Paths.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Reached target Timers.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Starting D-Bus User Message Bus Socket...
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Starting Create User's Volatile Files and Directories...
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Listening on D-Bus User Message Bus Socket.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Reached target Sockets.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Finished Create User's Volatile Files and Directories.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Reached target Basic System.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Reached target Main User Target.
Dec 06 06:48:24 np0005548790.novalocal systemd[4179]: Startup finished in 102ms.
Dec 06 06:48:24 np0005548790.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 06 06:48:24 np0005548790.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 06 06:48:24 np0005548790.novalocal sshd[4175]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 06:48:25 np0005548790.novalocal python3[4231]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:48:36 np0005548790.novalocal python3[4250]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:48:42 np0005548790.novalocal python3[4303]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:48:43 np0005548790.novalocal python3[4333]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 06 06:48:46 np0005548790.novalocal python3[4349]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:48:47 np0005548790.novalocal python3[4363]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:48 np0005548790.novalocal python3[4422]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:48:48 np0005548790.novalocal python3[4463]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003728.1793175-395-220618469208667/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa follow=False checksum=59556e0a2f4b936183817041ae1f59f0f3c92dd9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:51 np0005548790.novalocal python3[4536]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:48:51 np0005548790.novalocal python3[4577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765003730.9409611-497-136236842101793/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa.pub follow=False checksum=2b77fe3fb3441abe077d8d93b68745bd8f418f92 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:48:53 np0005548790.novalocal python3[4605]: ansible-ping Invoked with data=pong
Dec 06 06:48:55 np0005548790.novalocal python3[4619]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 06:48:58 np0005548790.novalocal python3[4672]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 06 06:49:01 np0005548790.novalocal python3[4694]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:02 np0005548790.novalocal python3[4708]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:02 np0005548790.novalocal python3[4722]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:04 np0005548790.novalocal python3[4736]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:04 np0005548790.novalocal python3[4750]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:04 np0005548790.novalocal python3[4764]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:06 np0005548790.novalocal sudo[4778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvbjzmlnrzvrqfqcccjkjbmxbegcotgj ; /usr/bin/python3
Dec 06 06:49:06 np0005548790.novalocal sudo[4778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:07 np0005548790.novalocal python3[4780]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:07 np0005548790.novalocal sudo[4778]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:08 np0005548790.novalocal sudo[4826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-achgmzxtlgnzsuqalbpkgrqbkllrefnu ; /usr/bin/python3
Dec 06 06:49:08 np0005548790.novalocal sudo[4826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:08 np0005548790.novalocal python3[4828]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:49:08 np0005548790.novalocal sudo[4826]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:08 np0005548790.novalocal sudo[4869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urqsqteiofynxqbbgyfktbqtqgjzjyhu ; /usr/bin/python3
Dec 06 06:49:08 np0005548790.novalocal sudo[4869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:08 np0005548790.novalocal python3[4871]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003748.3056865-105-142301762938387/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:08 np0005548790.novalocal sudo[4869]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:16 np0005548790.novalocal python3[4900]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:16 np0005548790.novalocal python3[4914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:17 np0005548790.novalocal python3[4928]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:17 np0005548790.novalocal python3[4942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:17 np0005548790.novalocal python3[4956]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:17 np0005548790.novalocal python3[4970]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:18 np0005548790.novalocal python3[4984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:18 np0005548790.novalocal python3[4998]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:18 np0005548790.novalocal python3[5012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:18 np0005548790.novalocal python3[5026]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:19 np0005548790.novalocal python3[5040]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:19 np0005548790.novalocal python3[5054]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:19 np0005548790.novalocal python3[5068]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:20 np0005548790.novalocal python3[5082]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:20 np0005548790.novalocal python3[5096]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:20 np0005548790.novalocal python3[5110]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:20 np0005548790.novalocal python3[5124]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:21 np0005548790.novalocal python3[5138]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:21 np0005548790.novalocal python3[5152]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:21 np0005548790.novalocal python3[5166]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:21 np0005548790.novalocal python3[5180]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:22 np0005548790.novalocal python3[5194]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:22 np0005548790.novalocal python3[5208]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:22 np0005548790.novalocal python3[5222]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:22 np0005548790.novalocal python3[5236]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:23 np0005548790.novalocal python3[5250]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 06:49:24 np0005548790.novalocal sudo[5264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfcsxndvcdtxmhmscieveioakcvoneae ; /usr/bin/python3
Dec 06 06:49:24 np0005548790.novalocal sudo[5264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:25 np0005548790.novalocal python3[5266]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 06:49:25 np0005548790.novalocal systemd[1]: Starting Time & Date Service...
Dec 06 06:49:25 np0005548790.novalocal systemd[1]: Started Time & Date Service.
Dec 06 06:49:25 np0005548790.novalocal systemd-timedated[5268]: Changed time zone to 'UTC' (UTC).
Dec 06 06:49:25 np0005548790.novalocal sudo[5264]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:26 np0005548790.novalocal sudo[5285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtizheurjlczqvgdxkptpwhvtubddmvt ; /usr/bin/python3
Dec 06 06:49:26 np0005548790.novalocal sudo[5285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:26 np0005548790.novalocal python3[5287]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:26 np0005548790.novalocal sudo[5285]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:28 np0005548790.novalocal python3[5333]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:49:28 np0005548790.novalocal python3[5374]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765003767.8718057-500-26249290996828/source _original_basename=tmp1iiaz1yp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:29 np0005548790.novalocal python3[5434]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:49:30 np0005548790.novalocal python3[5475]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765003769.4943426-589-8648771878696/source _original_basename=tmpoxof2lnw follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:31 np0005548790.novalocal sudo[5535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppdprudeumtycnfklqmexazwlqskskhw ; /usr/bin/python3
Dec 06 06:49:31 np0005548790.novalocal sudo[5535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:31 np0005548790.novalocal python3[5537]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:49:31 np0005548790.novalocal sudo[5535]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:31 np0005548790.novalocal sudo[5578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkvpggcppdzyhukdviirhhneikjauvik ; /usr/bin/python3
Dec 06 06:49:31 np0005548790.novalocal sudo[5578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:32 np0005548790.novalocal python3[5580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765003771.5360122-732-79953851713417/source _original_basename=tmpqih43epf follow=False checksum=9aa420946138b91e611361a1f3fc02e7d91b7140 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:32 np0005548790.novalocal sudo[5578]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:33 np0005548790.novalocal python3[5608]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:33 np0005548790.novalocal python3[5624]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:34 np0005548790.novalocal sudo[5672]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytfqekhgyirianrpsvyoujiddoqbrwha ; /usr/bin/python3
Dec 06 06:49:34 np0005548790.novalocal sudo[5672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:34 np0005548790.novalocal python3[5674]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:49:34 np0005548790.novalocal sudo[5672]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:34 np0005548790.novalocal sudo[5715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-paptvamkbghzosqcqrvtzvaxudhoatbt ; /usr/bin/python3
Dec 06 06:49:34 np0005548790.novalocal sudo[5715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:35 np0005548790.novalocal python3[5717]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765003774.4522738-853-55613218223377/source _original_basename=tmpn058kapl follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:35 np0005548790.novalocal sudo[5715]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:36 np0005548790.novalocal sudo[5746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esdwwdobbnudawvekgxrbjuudqvbgpav ; /usr/bin/python3
Dec 06 06:49:36 np0005548790.novalocal sudo[5746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:49:36 np0005548790.novalocal python3[5748]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-8d81-2216-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:49:36 np0005548790.novalocal sudo[5746]: pam_unix(sudo:session): session closed for user root
Dec 06 06:49:38 np0005548790.novalocal python3[5766]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-8d81-2216-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 06 06:49:39 np0005548790.novalocal python3[5784]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:49:55 np0005548790.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 06:50:37 np0005548790.novalocal sudo[5802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jusvvxrsxaoymjddrxkmxlsdxdslgdth ; /usr/bin/python3
Dec 06 06:50:37 np0005548790.novalocal sudo[5802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:50:37 np0005548790.novalocal python3[5804]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:50:38 np0005548790.novalocal sudo[5802]: pam_unix(sudo:session): session closed for user root
Dec 06 06:50:45 np0005548790.novalocal systemd[4179]: Starting Mark boot as successful...
Dec 06 06:50:45 np0005548790.novalocal systemd[4179]: Finished Mark boot as successful.
Dec 06 06:51:38 np0005548790.novalocal sshd[4188]: Received disconnect from 38.102.83.114 port 39212:11: disconnected by user
Dec 06 06:51:38 np0005548790.novalocal sshd[4188]: Disconnected from user zuul 38.102.83.114 port 39212
Dec 06 06:51:38 np0005548790.novalocal sshd[4175]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:51:38 np0005548790.novalocal systemd-logind[760]: Session 1 logged out. Waiting for processes to exit.
Dec 06 06:51:51 np0005548790.novalocal systemd[1]: Unmounting EFI System Partition Automount...
Dec 06 06:51:51 np0005548790.novalocal systemd[1]: efi.mount: Deactivated successfully.
Dec 06 06:51:51 np0005548790.novalocal systemd[1]: Unmounted EFI System Partition Automount.
Dec 06 06:52:46 np0005548790.novalocal sshd[5811]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:52:47 np0005548790.novalocal sshd[5811]: error: kex_exchange_identification: Connection closed by remote host
Dec 06 06:52:47 np0005548790.novalocal sshd[5811]: Connection closed by 198.235.24.168 port 53420
Dec 06 06:53:45 np0005548790.novalocal systemd[4179]: Created slice User Background Tasks Slice.
Dec 06 06:53:45 np0005548790.novalocal systemd[4179]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 06:53:45 np0005548790.novalocal systemd[4179]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: reg 0x10: [io  0x0000-0x003f]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Dec 06 06:54:44 np0005548790.novalocal kernel: pci 0000:00:07.0: BAR 0: assigned [io  0x1000-0x103f]
Dec 06 06:54:44 np0005548790.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9572] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 06:54:44 np0005548790.novalocal systemd-udevd[5813]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9708] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9737] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9741] device (eth1): carrier: link connected
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9743] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9748] policy: auto-activating connection 'Wired connection 1' (7ab89191-74d6-302b-9815-2606df2499fa)
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9755] device (eth1): Activation: starting connection 'Wired connection 1' (7ab89191-74d6-302b-9815-2606df2499fa)
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9757] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9761] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9767] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 06 06:54:44 np0005548790.novalocal NetworkManager[789]: <info>  [1765004084.9772] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:54:45 np0005548790.novalocal sshd[5816]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:54:46 np0005548790.novalocal kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec 06 06:54:46 np0005548790.novalocal sshd[5816]: Accepted publickey for zuul from 38.102.83.114 port 54582 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 06:54:46 np0005548790.novalocal systemd-logind[760]: New session 3 of user zuul.
Dec 06 06:54:46 np0005548790.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 06 06:54:46 np0005548790.novalocal sshd[5816]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 06:54:46 np0005548790.novalocal python3[5833]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-1ece-0164-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:54:59 np0005548790.novalocal sudo[5881]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfmhenamdypljmdgaeqxnsdbhllnuwte ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 06 06:54:59 np0005548790.novalocal sudo[5881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:54:59 np0005548790.novalocal python3[5883]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:54:59 np0005548790.novalocal sudo[5881]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:00 np0005548790.novalocal sudo[5924]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oeugxgebjixmrxomkewiqlyotmsilaaw ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 06 06:55:00 np0005548790.novalocal sudo[5924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:55:00 np0005548790.novalocal python3[5926]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765004099.5706203-537-117672265233994/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=df6702baa55e79113f13ba279699894642710af1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:55:00 np0005548790.novalocal sudo[5924]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:00 np0005548790.novalocal sudo[5954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hpcenoqrrmnsuaucwrsxtxqhkujgjxcr ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 06 06:55:00 np0005548790.novalocal sudo[5954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:55:00 np0005548790.novalocal python3[5956]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Stopping Network Manager...
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.7972] caught SIGTERM, shutting down normally.
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8098] dhcp4 (eth0): canceled DHCP transaction
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8100] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8101] dhcp4 (eth0): state changed no lease
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8108] manager: NetworkManager state is now CONNECTING
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8198] dhcp4 (eth1): canceled DHCP transaction
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8199] dhcp4 (eth1): state changed no lease
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[789]: <info>  [1765004100.8272] exiting (success)
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Stopped Network Manager.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: NetworkManager.service: Consumed 2.534s CPU time.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Starting Network Manager...
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.8756] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:212d9f56-eeac-46d2-9ba5-a3e0b9ec2c1a)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.8759] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Started Network Manager.
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.8783] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.8834] manager[0x563d61c10090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Starting Hostname Service...
Dec 06 06:55:00 np0005548790.novalocal sudo[5954]: pam_unix(sudo:session): session closed for user root
Dec 06 06:55:00 np0005548790.novalocal systemd[1]: Started Hostname Service.
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9748] hostname: hostname: using hostnamed
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9748] hostname: static hostname changed from (none) to "np0005548790.novalocal"
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9756] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9766] manager[0x563d61c10090]: rfkill: Wi-Fi hardware radio set enabled
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9766] manager[0x563d61c10090]: rfkill: WWAN hardware radio set enabled
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9820] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9820] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9821] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9822] manager: Networking is enabled by state file
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9830] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9830] settings: Loaded settings plugin: keyfile (internal)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9875] dhcp: init: Using DHCP client 'internal'
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9879] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9886] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9892] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9904] device (lo): Activation: starting connection 'lo' (a79cf659-28d7-404e-b1b6-918ea52b62d2)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9913] device (eth0): carrier: link connected
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9919] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9925] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9926] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9933] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9942] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9950] device (eth1): carrier: link connected
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9955] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9963] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (7ab89191-74d6-302b-9815-2606df2499fa) (indicated)
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9963] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9969] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 06 06:55:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004100.9978] device (eth1): Activation: starting connection 'Wired connection 1' (7ab89191-74d6-302b-9815-2606df2499fa)
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0004] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0010] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0012] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0016] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0020] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0022] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0025] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0027] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0035] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0039] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0051] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0056] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0105] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0112] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0118] device (lo): Activation: successful, device activated.
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0127] dhcp4 (eth0): state changed new lease, address=38.102.83.234
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0132] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0241] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0280] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0282] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0287] manager: NetworkManager state is now CONNECTED_SITE
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0291] device (eth0): Activation: successful, device activated.
Dec 06 06:55:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004101.0297] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 06 06:55:01 np0005548790.novalocal python3[6024]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-1ece-0164-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 06:55:11 np0005548790.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 06:55:31 np0005548790.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 06:55:45 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004145.7938] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:45 np0005548790.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 06:55:45 np0005548790.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 06:55:45 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004145.8138] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:45 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004145.8142] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 06 06:55:45 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004145.8149] device (eth1): Activation: successful, device activated.
Dec 06 06:55:45 np0005548790.novalocal NetworkManager[5968]: <info>  [1765004145.8156] manager: startup complete
Dec 06 06:55:45 np0005548790.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 06 06:55:55 np0005548790.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 06:56:01 np0005548790.novalocal sshd[5819]: Received disconnect from 38.102.83.114 port 54582:11: disconnected by user
Dec 06 06:56:01 np0005548790.novalocal sshd[5819]: Disconnected from user zuul 38.102.83.114 port 54582
Dec 06 06:56:01 np0005548790.novalocal sshd[5816]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:56:01 np0005548790.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 06 06:56:01 np0005548790.novalocal systemd[1]: session-3.scope: Consumed 1.463s CPU time.
Dec 06 06:56:01 np0005548790.novalocal systemd-logind[760]: Session 3 logged out. Waiting for processes to exit.
Dec 06 06:56:01 np0005548790.novalocal systemd-logind[760]: Removed session 3.
Dec 06 06:56:19 np0005548790.novalocal sshd[6059]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:56:19 np0005548790.novalocal sshd[6059]: Accepted publickey for zuul from 38.102.83.114 port 47848 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 06:56:19 np0005548790.novalocal systemd-logind[760]: New session 4 of user zuul.
Dec 06 06:56:19 np0005548790.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 06 06:56:19 np0005548790.novalocal sshd[6059]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 06:56:20 np0005548790.novalocal sudo[6108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jysgzrwnfmzswdbpnxpnqmbzqcrcqusv ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 06 06:56:20 np0005548790.novalocal sudo[6108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:56:20 np0005548790.novalocal python3[6110]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 06:56:20 np0005548790.novalocal sudo[6108]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:20 np0005548790.novalocal sudo[6151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwcdkecagpyaqrozirhwpstnflnooosw ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 06 06:56:20 np0005548790.novalocal sudo[6151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 06:56:20 np0005548790.novalocal python3[6153]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765004179.9661353-628-141198069363264/source _original_basename=tmpk6rsrc3c follow=False checksum=301833a7e04d955921816dd6c79e775f1a8a19aa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 06:56:20 np0005548790.novalocal sudo[6151]: pam_unix(sudo:session): session closed for user root
Dec 06 06:56:22 np0005548790.novalocal sshd[6059]: pam_unix(sshd:session): session closed for user zuul
Dec 06 06:56:22 np0005548790.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 06 06:56:22 np0005548790.novalocal systemd-logind[760]: Session 4 logged out. Waiting for processes to exit.
Dec 06 06:56:22 np0005548790.novalocal systemd-logind[760]: Removed session 4.
Dec 06 06:56:23 np0005548790.novalocal sshd[6169]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 06:56:40 np0005548790.novalocal sshd[6169]: Connection closed by 167.94.138.160 port 16056 [preauth]
Dec 06 07:00:59 np0005548790.novalocal sshd[6172]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:01:01 np0005548790.novalocal sshd[6172]: Received disconnect from 45.78.219.217 port 35882:11: Bye Bye [preauth]
Dec 06 07:01:01 np0005548790.novalocal sshd[6172]: Disconnected from authenticating user root 45.78.219.217 port 35882 [preauth]
Dec 06 07:01:01 np0005548790.novalocal CROND[6175]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 07:01:01 np0005548790.novalocal run-parts[6178]: (/etc/cron.hourly) starting 0anacron
Dec 06 07:01:01 np0005548790.novalocal anacron[6186]: Anacron started on 2025-12-06
Dec 06 07:01:01 np0005548790.novalocal anacron[6186]: Will run job `cron.daily' in 42 min.
Dec 06 07:01:01 np0005548790.novalocal anacron[6186]: Will run job `cron.weekly' in 62 min.
Dec 06 07:01:01 np0005548790.novalocal anacron[6186]: Will run job `cron.monthly' in 82 min.
Dec 06 07:01:01 np0005548790.novalocal anacron[6186]: Jobs will be executed sequentially
Dec 06 07:01:01 np0005548790.novalocal run-parts[6188]: (/etc/cron.hourly) finished 0anacron
Dec 06 07:01:01 np0005548790.novalocal CROND[6174]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 07:01:09 np0005548790.novalocal sshd[6190]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:01:35 np0005548790.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Dec 06 07:01:35 np0005548790.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 06 07:01:35 np0005548790.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Dec 06 07:01:35 np0005548790.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 06 07:02:52 np0005548790.novalocal sshd[6194]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:03:09 np0005548790.novalocal sshd[6190]: fatal: Timeout before authentication for 125.122.30.255 port 38092
Dec 06 07:04:02 np0005548790.novalocal sshd[6196]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:04:09 np0005548790.novalocal sshd[6196]: Connection closed by 45.78.219.217 port 38550 [preauth]
Dec 06 07:04:09 np0005548790.novalocal systemd[1]: Starting dnf makecache...
Dec 06 07:04:10 np0005548790.novalocal dnf[6198]: Failed determining last makecache time.
Dec 06 07:04:10 np0005548790.novalocal dnf[6198]: There are no enabled repositories in "/etc/yum.repos.d", "/etc/yum/repos.d", "/etc/distro.repos.d".
Dec 06 07:04:10 np0005548790.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 07:04:10 np0005548790.novalocal systemd[1]: Finished dnf makecache.
Dec 06 07:04:39 np0005548790.novalocal sshd[6200]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:04:39 np0005548790.novalocal sshd[6200]: Accepted publickey for zuul from 38.102.83.114 port 40730 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:04:39 np0005548790.novalocal systemd-logind[760]: New session 5 of user zuul.
Dec 06 07:04:39 np0005548790.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 06 07:04:39 np0005548790.novalocal sshd[6200]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:04:39 np0005548790.novalocal sudo[6217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiscyyuixknsgkzzidqycqjizlzzghuj ; /usr/bin/python3
Dec 06 07:04:39 np0005548790.novalocal sudo[6217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:40 np0005548790.novalocal python3[6219]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-e5b2-9de0-000000001d10-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:04:40 np0005548790.novalocal sudo[6217]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:41 np0005548790.novalocal sudo[6236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hedpkjpattsjksqkhkufgvzcdaffybro ; /usr/bin/python3
Dec 06 07:04:41 np0005548790.novalocal sudo[6236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:41 np0005548790.novalocal python3[6238]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:04:41 np0005548790.novalocal sudo[6236]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:41 np0005548790.novalocal sudo[6252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyttgiehumdznxqjbrxbqbeghgewxclb ; /usr/bin/python3
Dec 06 07:04:41 np0005548790.novalocal sudo[6252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:41 np0005548790.novalocal python3[6254]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:04:41 np0005548790.novalocal sudo[6252]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:41 np0005548790.novalocal sudo[6268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yswacxxypcsrlfgaedtmryzbssusgzry ; /usr/bin/python3
Dec 06 07:04:41 np0005548790.novalocal sudo[6268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:42 np0005548790.novalocal python3[6270]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:04:42 np0005548790.novalocal sudo[6268]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:42 np0005548790.novalocal sudo[6284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fyfwvstqldkzmgnlxdxoirurebuoywup ; /usr/bin/python3
Dec 06 07:04:42 np0005548790.novalocal sudo[6284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:42 np0005548790.novalocal python3[6286]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:04:42 np0005548790.novalocal sudo[6284]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:42 np0005548790.novalocal sudo[6300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbqflniygbqssnhzqeekkhidraszaofu ; /usr/bin/python3
Dec 06 07:04:42 np0005548790.novalocal sudo[6300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:43 np0005548790.novalocal python3[6302]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:04:43 np0005548790.novalocal sudo[6300]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:44 np0005548790.novalocal sudo[6348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqgrscadzwgbearhvyndjooumjabjdbz ; /usr/bin/python3
Dec 06 07:04:44 np0005548790.novalocal sudo[6348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:44 np0005548790.novalocal python3[6350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:04:44 np0005548790.novalocal sudo[6348]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:44 np0005548790.novalocal sudo[6391]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytcbigzeevandxmtypwgffgibavhiotp ; /usr/bin/python3
Dec 06 07:04:44 np0005548790.novalocal sudo[6391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:44 np0005548790.novalocal python3[6393]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765004684.114895-653-201776103444407/source _original_basename=tmpdtmxop13 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:04:44 np0005548790.novalocal sudo[6391]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:46 np0005548790.novalocal sudo[6421]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbkxqruxyjgvllbhlfxnxxehhmjlbkia ; /usr/bin/python3
Dec 06 07:04:46 np0005548790.novalocal sudo[6421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:46 np0005548790.novalocal python3[6423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 07:04:46 np0005548790.novalocal systemd[1]: Reloading.
Dec 06 07:04:46 np0005548790.novalocal systemd-rc-local-generator[6440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:04:46 np0005548790.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:04:46 np0005548790.novalocal sudo[6421]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:47 np0005548790.novalocal sudo[6467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aybupkstmpmbsosldykthnjvzoyfjonm ; /usr/bin/python3
Dec 06 07:04:47 np0005548790.novalocal sudo[6467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:48 np0005548790.novalocal python3[6469]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 06 07:04:48 np0005548790.novalocal sudo[6467]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:49 np0005548790.novalocal sudo[6483]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbdvynibdqobfzhnwxgmdawzgvkwnhkr ; /usr/bin/python3
Dec 06 07:04:49 np0005548790.novalocal sudo[6483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:49 np0005548790.novalocal python3[6485]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:04:49 np0005548790.novalocal sudo[6483]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:49 np0005548790.novalocal sudo[6501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfindgucczlpgjwwceysrrjliciiejli ; /usr/bin/python3
Dec 06 07:04:49 np0005548790.novalocal sudo[6501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:49 np0005548790.novalocal python3[6503]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:04:49 np0005548790.novalocal sudo[6501]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:50 np0005548790.novalocal sudo[6519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwuqmuujnwkenrhipqjfducdvuzvnwop ; /usr/bin/python3
Dec 06 07:04:50 np0005548790.novalocal sudo[6519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:50 np0005548790.novalocal python3[6521]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:04:50 np0005548790.novalocal sudo[6519]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:50 np0005548790.novalocal sudo[6537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpmbmdextzjromtrwiiiwykckvynzwws ; /usr/bin/python3
Dec 06 07:04:50 np0005548790.novalocal sudo[6537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:04:50 np0005548790.novalocal python3[6539]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:04:50 np0005548790.novalocal sudo[6537]: pam_unix(sudo:session): session closed for user root
Dec 06 07:04:52 np0005548790.novalocal sshd[6194]: fatal: Timeout before authentication for 101.126.135.154 port 59546
Dec 06 07:05:01 np0005548790.novalocal python3[6556]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-e5b2-9de0-000000001d17-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:05:02 np0005548790.novalocal python3[6576]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 07:05:05 np0005548790.novalocal sshd[6200]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:05:05 np0005548790.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 06 07:05:05 np0005548790.novalocal systemd[1]: session-5.scope: Consumed 3.774s CPU time.
Dec 06 07:05:05 np0005548790.novalocal systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Dec 06 07:05:05 np0005548790.novalocal systemd-logind[760]: Removed session 5.
Dec 06 07:06:44 np0005548790.novalocal sshd[6582]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:06:47 np0005548790.novalocal sshd[6582]: Received disconnect from 45.78.219.217 port 41814:11: Bye Bye [preauth]
Dec 06 07:06:47 np0005548790.novalocal sshd[6582]: Disconnected from authenticating user root 45.78.219.217 port 41814 [preauth]
Dec 06 07:07:04 np0005548790.novalocal sshd[6586]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:07:04 np0005548790.novalocal sshd[6586]: Accepted publickey for zuul from 38.102.83.114 port 52186 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:07:04 np0005548790.novalocal systemd-logind[760]: New session 6 of user zuul.
Dec 06 07:07:04 np0005548790.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 06 07:07:04 np0005548790.novalocal sshd[6586]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:07:04 np0005548790.novalocal sudo[6603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgwvumgqsnpkellrenwbbnqkaoeznbvr ; /usr/bin/python3
Dec 06 07:07:04 np0005548790.novalocal sudo[6603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:07:05 np0005548790.novalocal systemd[1]: Starting RHSM dbus service...
Dec 06 07:07:05 np0005548790.novalocal systemd[1]: Started RHSM dbus service.
Dec 06 07:07:05 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 06 07:07:05 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 06 07:07:05 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 06 07:07:05 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 06 07:07:06 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.managerlib:90] Consumer created: np0005548790.novalocal (5867a12f-d1f0-415d-b20c-b8a52147736c)
Dec 06 07:07:06 np0005548790.novalocal subscription-manager[6610]: Registered system with identity: 5867a12f-d1f0-415d-b20c-b8a52147736c
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.entcertlib:131] certs updated:
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]: Total updates: 1
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]: Found (local) serial# []
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]: Expected (UEP) serial# [1210632431389949340]
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]: Added (new)
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]:   [sn:1210632431389949340 ( Content Access,) @ /etc/pki/entitlement/1210632431389949340.pem]
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]: Deleted (rogue):
Dec 06 07:07:07 np0005548790.novalocal rhsm-service[6610]:   <NONE>
Dec 06 07:07:07 np0005548790.novalocal subscription-manager[6610]: Added subscription for 'Content Access' contract 'None'
Dec 06 07:07:07 np0005548790.novalocal subscription-manager[6610]: Added subscription for product ' Content Access'
Dec 06 07:07:08 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 06 07:07:08 np0005548790.novalocal rhsm-service[6610]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 06 07:07:08 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:07:08 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:07:08 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:07:09 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:07:09 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:07:10 np0005548790.novalocal sudo[6603]: pam_unix(sudo:session): session closed for user root
Dec 06 07:07:12 np0005548790.novalocal python3[6702]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-ea42-bf82-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:08:05 np0005548790.novalocal sudo[6719]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmayjiezdfehbetbiumrpcykhgkhwywz ; /usr/bin/python3
Dec 06 07:08:05 np0005548790.novalocal sudo[6719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:08:05 np0005548790.novalocal python3[6721]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 07:08:35 np0005548790.novalocal setsebool[6797]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 06 07:08:35 np0005548790.novalocal setsebool[6797]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  Converting 410 SID table entries...
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 07:08:43 np0005548790.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 07:08:56 np0005548790.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Dec 06 07:08:56 np0005548790.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 07:08:56 np0005548790.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 06 07:08:56 np0005548790.novalocal systemd[1]: Reloading.
Dec 06 07:08:56 np0005548790.novalocal systemd-rc-local-generator[7673]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:08:56 np0005548790.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:08:56 np0005548790.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 07:08:58 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:08:58 np0005548790.novalocal sudo[6719]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:00 np0005548790.novalocal sudo[13038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngbreawrhfyfsreafddwxujndvqcjkfc ; /usr/bin/python3
Dec 06 07:09:00 np0005548790.novalocal sudo[13038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:09:01 np0005548790.novalocal podman[13306]: 2025-12-06 07:09:01.017170826 +0000 UTC m=+0.109741299 system refresh
Dec 06 07:09:01 np0005548790.novalocal sudo[13038]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: Starting D-Bus User Message Bus...
Dec 06 07:09:01 np0005548790.novalocal dbus-broker-launch[14969]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 06 07:09:01 np0005548790.novalocal dbus-broker-launch[14969]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: Started D-Bus User Message Bus.
Dec 06 07:09:01 np0005548790.novalocal dbus-broker-lau[14969]: Ready
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: selinux: avc:  op=load_policy lsm=selinux seqno=3 res=1
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: Created slice Slice /user.
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: podman-14796.scope: unit configures an IP firewall, but not running as root.
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: (This warning is only shown for the first unit using IP firewalling.)
Dec 06 07:09:01 np0005548790.novalocal systemd[4179]: Started podman-14796.scope.
Dec 06 07:09:02 np0005548790.novalocal systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:09:02 np0005548790.novalocal systemd[4179]: Started podman-pause-442724bf.scope.
Dec 06 07:09:02 np0005548790.novalocal sshd[6586]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:09:02 np0005548790.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 06 07:09:02 np0005548790.novalocal systemd[1]: session-6.scope: Consumed 49.915s CPU time.
Dec 06 07:09:02 np0005548790.novalocal systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Dec 06 07:09:02 np0005548790.novalocal systemd-logind[760]: Removed session 6.
Dec 06 07:09:04 np0005548790.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 07:09:04 np0005548790.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 06 07:09:04 np0005548790.novalocal systemd[1]: man-db-cache-update.service: Consumed 9.737s CPU time.
Dec 06 07:09:04 np0005548790.novalocal systemd[1]: run-r6bda01da2d5d4e19b0be555ce9d60c0c.service: Deactivated successfully.
Dec 06 07:09:18 np0005548790.novalocal sshd[18452]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:18 np0005548790.novalocal sshd[18451]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:18 np0005548790.novalocal sshd[18450]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:18 np0005548790.novalocal sshd[18452]: Unable to negotiate with 38.102.83.83 port 59140: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 06 07:09:18 np0005548790.novalocal sshd[18449]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:18 np0005548790.novalocal sshd[18453]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:18 np0005548790.novalocal sshd[18451]: Unable to negotiate with 38.102.83.83 port 59162: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 06 07:09:18 np0005548790.novalocal sshd[18450]: Connection closed by 38.102.83.83 port 59122 [preauth]
Dec 06 07:09:18 np0005548790.novalocal sshd[18449]: Connection closed by 38.102.83.83 port 59138 [preauth]
Dec 06 07:09:18 np0005548790.novalocal sshd[18453]: Unable to negotiate with 38.102.83.83 port 59156: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 06 07:09:22 np0005548790.novalocal sshd[18459]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:22 np0005548790.novalocal sshd[18459]: Accepted publickey for zuul from 38.102.83.114 port 45598 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:09:22 np0005548790.novalocal systemd-logind[760]: New session 7 of user zuul.
Dec 06 07:09:22 np0005548790.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 06 07:09:22 np0005548790.novalocal sshd[18459]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:09:22 np0005548790.novalocal python3[18476]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYVtM235X0xWH2FKli0CUGpvCLQnDDtCI4yCYqNdWcGuxt1LThsgCBuwYYpkvH+K5VLRKMEyM949Yu6yQU/mgI= zuul@np0005548782.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:09:23 np0005548790.novalocal sudo[18490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlygpkfyqxhububdtmixyapuumxohewz ; /usr/bin/python3
Dec 06 07:09:23 np0005548790.novalocal sudo[18490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:09:23 np0005548790.novalocal python3[18492]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYVtM235X0xWH2FKli0CUGpvCLQnDDtCI4yCYqNdWcGuxt1LThsgCBuwYYpkvH+K5VLRKMEyM949Yu6yQU/mgI= zuul@np0005548782.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:09:23 np0005548790.novalocal sudo[18490]: pam_unix(sudo:session): session closed for user root
Dec 06 07:09:24 np0005548790.novalocal sshd[18493]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:09:25 np0005548790.novalocal sshd[18459]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:09:25 np0005548790.novalocal systemd[1]: session-7.scope: Deactivated successfully.
Dec 06 07:09:25 np0005548790.novalocal systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Dec 06 07:09:25 np0005548790.novalocal systemd-logind[760]: Removed session 7.
Dec 06 07:11:02 np0005548790.novalocal sshd[18496]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:11:02 np0005548790.novalocal sshd[18496]: Accepted publickey for zuul from 38.102.83.114 port 46470 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:11:02 np0005548790.novalocal systemd-logind[760]: New session 8 of user zuul.
Dec 06 07:11:02 np0005548790.novalocal systemd[1]: Started Session 8 of User zuul.
Dec 06 07:11:02 np0005548790.novalocal sshd[18496]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:11:02 np0005548790.novalocal sudo[18513]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trdeotptbjkbhviybrdnptfdvjevciez ; /usr/bin/python3
Dec 06 07:11:02 np0005548790.novalocal sudo[18513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:03 np0005548790.novalocal python3[18515]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:11:03 np0005548790.novalocal sudo[18513]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:03 np0005548790.novalocal sudo[18529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrsgpzuoulcgxttkrmgwkeopwygteirq ; /usr/bin/python3
Dec 06 07:11:03 np0005548790.novalocal sudo[18529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:04 np0005548790.novalocal python3[18531]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548790.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 06 07:11:04 np0005548790.novalocal sudo[18529]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:05 np0005548790.novalocal sudo[18579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eazrujwvvxecqyczrhpnibqktzfritxh ; /usr/bin/python3
Dec 06 07:11:05 np0005548790.novalocal sudo[18579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:05 np0005548790.novalocal python3[18581]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:11:05 np0005548790.novalocal sudo[18579]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:05 np0005548790.novalocal sudo[18622]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwxotzgpctroellrmhkhtcqfaoeyrsqq ; /usr/bin/python3
Dec 06 07:11:05 np0005548790.novalocal sudo[18622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:05 np0005548790.novalocal python3[18624]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765005065.3236446-138-17885261704371/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa follow=False checksum=59556e0a2f4b936183817041ae1f59f0f3c92dd9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:11:05 np0005548790.novalocal sudo[18622]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:07 np0005548790.novalocal sudo[18684]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njcfjhnyopiqzivuouguxdfnkmevukjq ; /usr/bin/python3
Dec 06 07:11:07 np0005548790.novalocal sudo[18684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:07 np0005548790.novalocal python3[18686]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:11:07 np0005548790.novalocal sudo[18684]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:07 np0005548790.novalocal sudo[18727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzlpbhkntvzrdgxbgtoegqslqrfgakph ; /usr/bin/python3
Dec 06 07:11:07 np0005548790.novalocal sudo[18727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:07 np0005548790.novalocal python3[18729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765005067.1834595-225-63951124934433/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=00c07cb1874e45b180d5c151333e96b1_id_rsa.pub follow=False checksum=2b77fe3fb3441abe077d8d93b68745bd8f418f92 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:11:07 np0005548790.novalocal sudo[18727]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:09 np0005548790.novalocal sudo[18757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gcdmxhndqlrppzjgcyjgqikcneczddqp ; /usr/bin/python3
Dec 06 07:11:09 np0005548790.novalocal sudo[18757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:11:09 np0005548790.novalocal python3[18759]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:11:09 np0005548790.novalocal sudo[18757]: pam_unix(sudo:session): session closed for user root
Dec 06 07:11:10 np0005548790.novalocal python3[18805]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:11:11 np0005548790.novalocal python3[18821]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmphwg3va5r recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:11:12 np0005548790.novalocal python3[18881]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:11:12 np0005548790.novalocal python3[18897]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpzrb5nb77 recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:11:14 np0005548790.novalocal python3[18957]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:11:14 np0005548790.novalocal python3[18973]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpj4b5ugg2 recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:11:14 np0005548790.novalocal sshd[18496]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:11:14 np0005548790.novalocal systemd[1]: session-8.scope: Deactivated successfully.
Dec 06 07:11:14 np0005548790.novalocal systemd[1]: session-8.scope: Consumed 3.532s CPU time.
Dec 06 07:11:14 np0005548790.novalocal systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Dec 06 07:11:14 np0005548790.novalocal systemd-logind[760]: Removed session 8.
Dec 06 07:11:24 np0005548790.novalocal sshd[18493]: fatal: Timeout before authentication for 45.78.219.217 port 59758
Dec 06 07:13:29 np0005548790.novalocal sshd[18989]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:13:29 np0005548790.novalocal sshd[18989]: Accepted publickey for zuul from 38.102.83.83 port 44030 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:13:29 np0005548790.novalocal systemd-logind[760]: New session 9 of user zuul.
Dec 06 07:13:29 np0005548790.novalocal systemd[1]: Started Session 9 of User zuul.
Dec 06 07:13:29 np0005548790.novalocal sshd[18989]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:13:29 np0005548790.novalocal python3[19035]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:15:03 np0005548790.novalocal sshd[19038]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:15:11 np0005548790.novalocal sshd[19038]: error: maximum authentication attempts exceeded for root from 181.23.194.117 port 56494 ssh2 [preauth]
Dec 06 07:15:11 np0005548790.novalocal sshd[19038]: Disconnecting authenticating user root 181.23.194.117 port 56494: Too many authentication failures [preauth]
Dec 06 07:15:11 np0005548790.novalocal sshd[19040]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:15:22 np0005548790.novalocal sshd[19040]: error: maximum authentication attempts exceeded for root from 181.23.194.117 port 56506 ssh2 [preauth]
Dec 06 07:15:22 np0005548790.novalocal sshd[19040]: Disconnecting authenticating user root 181.23.194.117 port 56506: Too many authentication failures [preauth]
Dec 06 07:15:22 np0005548790.novalocal sshd[19042]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:15:36 np0005548790.novalocal sshd[19042]: Connection closed by 181.23.194.117 port 56522 [preauth]
Dec 06 07:17:20 np0005548790.novalocal sshd[19045]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:17:21 np0005548790.novalocal sshd[19045]: Received disconnect from 45.78.219.217 port 39850:11: Bye Bye [preauth]
Dec 06 07:17:21 np0005548790.novalocal sshd[19045]: Disconnected from authenticating user root 45.78.219.217 port 39850 [preauth]
Dec 06 07:18:29 np0005548790.novalocal sshd[18992]: Received disconnect from 38.102.83.83 port 44030:11: disconnected by user
Dec 06 07:18:29 np0005548790.novalocal sshd[18992]: Disconnected from user zuul 38.102.83.83 port 44030
Dec 06 07:18:29 np0005548790.novalocal sshd[18989]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:18:29 np0005548790.novalocal systemd[1]: session-9.scope: Deactivated successfully.
Dec 06 07:18:29 np0005548790.novalocal systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Dec 06 07:18:29 np0005548790.novalocal systemd-logind[760]: Removed session 9.
Dec 06 07:20:02 np0005548790.novalocal sshd[19048]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:20:06 np0005548790.novalocal sshd[19048]: Received disconnect from 45.78.219.217 port 53868:11: Bye Bye [preauth]
Dec 06 07:20:06 np0005548790.novalocal sshd[19048]: Disconnected from authenticating user root 45.78.219.217 port 53868 [preauth]
Dec 06 07:22:50 np0005548790.novalocal sshd[19050]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:22:53 np0005548790.novalocal sshd[19050]: Received disconnect from 45.78.219.217 port 49936:11: Bye Bye [preauth]
Dec 06 07:22:53 np0005548790.novalocal sshd[19050]: Disconnected from authenticating user root 45.78.219.217 port 49936 [preauth]
Dec 06 07:24:16 np0005548790.novalocal sshd[19052]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:24:18 np0005548790.novalocal sshd[19052]: Invalid user admin from 45.135.232.92 port 29182
Dec 06 07:24:18 np0005548790.novalocal sshd[19052]: Connection reset by invalid user admin 45.135.232.92 port 29182 [preauth]
Dec 06 07:24:18 np0005548790.novalocal sshd[19054]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:24:20 np0005548790.novalocal sshd[19054]: Invalid user ftpuser from 45.135.232.92 port 29202
Dec 06 07:24:20 np0005548790.novalocal sshd[19054]: Connection reset by invalid user ftpuser 45.135.232.92 port 29202 [preauth]
Dec 06 07:24:20 np0005548790.novalocal sshd[19056]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:24:22 np0005548790.novalocal sshd[19056]: Connection reset by authenticating user root 45.135.232.92 port 29216 [preauth]
Dec 06 07:24:22 np0005548790.novalocal sshd[19059]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:24:25 np0005548790.novalocal sshd[19059]: Invalid user 123456 from 45.135.232.92 port 29238
Dec 06 07:24:25 np0005548790.novalocal sshd[19059]: Connection reset by invalid user 123456 45.135.232.92 port 29238 [preauth]
Dec 06 07:24:25 np0005548790.novalocal sshd[19061]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:24:27 np0005548790.novalocal sshd[19061]: Connection reset by authenticating user root 45.135.232.92 port 62256 [preauth]
Dec 06 07:28:31 np0005548790.novalocal sshd[19064]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:28:35 np0005548790.novalocal sshd[19064]: Received disconnect from 45.78.219.217 port 34656:11: Bye Bye [preauth]
Dec 06 07:28:35 np0005548790.novalocal sshd[19064]: Disconnected from authenticating user root 45.78.219.217 port 34656 [preauth]
Dec 06 07:31:12 np0005548790.novalocal sshd[19067]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:31:20 np0005548790.novalocal sshd[19067]: Connection closed by 45.78.219.217 port 53540 [preauth]
Dec 06 07:31:32 np0005548790.novalocal sshd[19071]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:31:32 np0005548790.novalocal sshd[19071]: Accepted publickey for zuul from 38.102.83.114 port 42236 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:31:32 np0005548790.novalocal systemd-logind[760]: New session 10 of user zuul.
Dec 06 07:31:32 np0005548790.novalocal systemd[1]: Started Session 10 of User zuul.
Dec 06 07:31:32 np0005548790.novalocal sshd[19071]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:31:32 np0005548790.novalocal python3[19088]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:31:35 np0005548790.novalocal sudo[19106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgvpuebrmyrzmolkcjjqcdobibgatovo ; /usr/bin/python3
Dec 06 07:31:35 np0005548790.novalocal sudo[19106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:31:35 np0005548790.novalocal python3[19108]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:31:37 np0005548790.novalocal sudo[19106]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:06 np0005548790.novalocal sudo[19126]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udtparqpeiryylcreqauhozspbdlmbtj ; /usr/bin/python3
Dec 06 07:32:06 np0005548790.novalocal sudo[19126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:32:06 np0005548790.novalocal python3[19128]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 06 07:32:09 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:32:10 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:32:35 np0005548790.novalocal sudo[19126]: pam_unix(sudo:session): session closed for user root
Dec 06 07:32:42 np0005548790.novalocal sudo[19283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfhyrblzwxapkhzkftkhlveeagghltmg ; /usr/bin/python3
Dec 06 07:32:42 np0005548790.novalocal sudo[19283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:32:42 np0005548790.novalocal python3[19285]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 06 07:32:45 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:32:45 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:32:47 np0005548790.novalocal sudo[19283]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:03 np0005548790.novalocal sudo[19484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmbcmzjupecxouxzgrvcowiynwmhiczp ; /usr/bin/python3
Dec 06 07:33:03 np0005548790.novalocal sudo[19484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:33:03 np0005548790.novalocal python3[19486]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 06 07:33:06 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:06 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:11 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:11 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:18 np0005548790.novalocal sudo[19484]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:36 np0005548790.novalocal sudo[19760]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euvpdfizbotmtampfjpcdcjvzvbjtoho ; /usr/bin/python3
Dec 06 07:33:36 np0005548790.novalocal sudo[19760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:33:36 np0005548790.novalocal python3[19762]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 06 07:33:39 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:39 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:44 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:44 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:33:51 np0005548790.novalocal sudo[19760]: pam_unix(sudo:session): session closed for user root
Dec 06 07:33:58 np0005548790.novalocal sshd[20083]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:34:05 np0005548790.novalocal sshd[20083]: Received disconnect from 45.78.219.217 port 56136:11: Bye Bye [preauth]
Dec 06 07:34:05 np0005548790.novalocal sshd[20083]: Disconnected from authenticating user root 45.78.219.217 port 56136 [preauth]
Dec 06 07:34:07 np0005548790.novalocal sudo[20098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eugvxafxbcasdohgewibjnqkbctvboed ; /usr/bin/python3
Dec 06 07:34:07 np0005548790.novalocal sudo[20098]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:34:07 np0005548790.novalocal python3[20100]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 06 07:34:10 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:34:11 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:34:16 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:34:16 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:34:23 np0005548790.novalocal sudo[20098]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:26 np0005548790.novalocal sudo[20436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnrpzokwtyqssmswctdbnojmmwoewzxt ; /usr/bin/python3
Dec 06 07:34:26 np0005548790.novalocal sudo[20436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:34:26 np0005548790.novalocal python3[20438]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:34:28 np0005548790.novalocal sudo[20436]: pam_unix(sudo:session): session closed for user root
Dec 06 07:34:55 np0005548790.novalocal sudo[20455]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-popcrslizsdjeinkkrtljrthxpjyzags ; /usr/bin/python3
Dec 06 07:34:55 np0005548790.novalocal sudo[20455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:34:55 np0005548790.novalocal python3[20457]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  Converting 488 SID table entries...
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 07:35:14 np0005548790.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 07:35:15 np0005548790.novalocal groupadd[20563]: group added to /etc/group: name=unbound, GID=987
Dec 06 07:35:15 np0005548790.novalocal groupadd[20563]: group added to /etc/gshadow: name=unbound
Dec 06 07:35:15 np0005548790.novalocal groupadd[20563]: new group: name=unbound, GID=987
Dec 06 07:35:15 np0005548790.novalocal useradd[20570]: new user: name=unbound, UID=987, GID=987, home=/etc/unbound, shell=/sbin/nologin, from=none
Dec 06 07:35:15 np0005548790.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Dec 06 07:35:15 np0005548790.novalocal systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 06 07:35:15 np0005548790.novalocal groupadd[20583]: group added to /etc/group: name=openvswitch, GID=986
Dec 06 07:35:15 np0005548790.novalocal groupadd[20583]: group added to /etc/gshadow: name=openvswitch
Dec 06 07:35:15 np0005548790.novalocal groupadd[20583]: new group: name=openvswitch, GID=986
Dec 06 07:35:15 np0005548790.novalocal useradd[20590]: new user: name=openvswitch, UID=986, GID=986, home=/, shell=/sbin/nologin, from=none
Dec 06 07:35:15 np0005548790.novalocal groupadd[20598]: group added to /etc/group: name=hugetlbfs, GID=985
Dec 06 07:35:15 np0005548790.novalocal groupadd[20598]: group added to /etc/gshadow: name=hugetlbfs
Dec 06 07:35:15 np0005548790.novalocal groupadd[20598]: new group: name=hugetlbfs, GID=985
Dec 06 07:35:15 np0005548790.novalocal usermod[20606]: add 'openvswitch' to group 'hugetlbfs'
Dec 06 07:35:15 np0005548790.novalocal usermod[20606]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 06 07:35:18 np0005548790.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 07:35:18 np0005548790.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 06 07:35:18 np0005548790.novalocal systemd[1]: Reloading.
Dec 06 07:35:18 np0005548790.novalocal systemd-sysv-generator[21120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:35:18 np0005548790.novalocal systemd-rc-local-generator[21113]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:35:18 np0005548790.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:35:18 np0005548790.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 07:35:19 np0005548790.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 07:35:19 np0005548790.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 06 07:35:19 np0005548790.novalocal systemd[1]: run-r37056befe6254763be3d84c538ff2849.service: Deactivated successfully.
Dec 06 07:35:20 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:35:20 np0005548790.novalocal sudo[20455]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:20 np0005548790.novalocal rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 07:35:24 np0005548790.novalocal sshd[21650]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:35:24 np0005548790.novalocal sshd[21650]: error: kex_exchange_identification: Connection closed by remote host
Dec 06 07:35:24 np0005548790.novalocal sshd[21650]: Connection closed by 193.32.162.146 port 45730
Dec 06 07:35:36 np0005548790.novalocal sudo[21664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gkmilxwapyuvfkxexvnydfsvugkocmqf ; /usr/bin/python3
Dec 06 07:35:36 np0005548790.novalocal sudo[21664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:36 np0005548790.novalocal python3[21666]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:35:50 np0005548790.novalocal sudo[21664]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:53 np0005548790.novalocal sudo[21686]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zappjrmdotsbdrbhgeikeusqinnolutp ; /usr/bin/python3
Dec 06 07:35:53 np0005548790.novalocal sudo[21686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:53 np0005548790.novalocal python3[21688]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:35:53 np0005548790.novalocal sudo[21686]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:55 np0005548790.novalocal sudo[21734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmmdwacszyvqlraehnkvdkknzcmvegus ; /usr/bin/python3
Dec 06 07:35:55 np0005548790.novalocal sudo[21734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:55 np0005548790.novalocal python3[21736]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:35:55 np0005548790.novalocal sudo[21734]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:55 np0005548790.novalocal sudo[21777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sukguccpcadayhpygccysgvmloszsvki ; /usr/bin/python3
Dec 06 07:35:55 np0005548790.novalocal sudo[21777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:55 np0005548790.novalocal python3[21779]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765006554.9233477-332-34728812754520/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:35:55 np0005548790.novalocal sudo[21777]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:57 np0005548790.novalocal sudo[21807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yklqzqpooeozzcmcbcsxcvhrllapkjlr ; /usr/bin/python3
Dec 06 07:35:57 np0005548790.novalocal sudo[21807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:57 np0005548790.novalocal python3[21809]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network  state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 06 07:35:57 np0005548790.novalocal sudo[21807]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:57 np0005548790.novalocal systemd-journald[618]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 06 07:35:57 np0005548790.novalocal systemd-journald[618]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 07:35:57 np0005548790.novalocal rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:35:57 np0005548790.novalocal rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 07:35:57 np0005548790.novalocal sudo[21828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gcvqjdisxceppzexntsthhygoaeesefy ; /usr/bin/python3
Dec 06 07:35:57 np0005548790.novalocal sudo[21828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:57 np0005548790.novalocal python3[21830]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 06 07:35:57 np0005548790.novalocal sudo[21828]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:57 np0005548790.novalocal sudo[21848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmfascnftolqlrdqediddrzrwfhighjz ; /usr/bin/python3
Dec 06 07:35:57 np0005548790.novalocal sudo[21848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:58 np0005548790.novalocal python3[21850]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 06 07:35:58 np0005548790.novalocal sudo[21848]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:58 np0005548790.novalocal sudo[21868]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhjybhhhutlkydoazxtpaebcrlcqqxrk ; /usr/bin/python3
Dec 06 07:35:58 np0005548790.novalocal sudo[21868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:58 np0005548790.novalocal python3[21870]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 06 07:35:58 np0005548790.novalocal sudo[21868]: pam_unix(sudo:session): session closed for user root
Dec 06 07:35:58 np0005548790.novalocal sudo[21888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aufswyzxfhxulxjqjkitvgzeqvnvfldz ; /usr/bin/python3
Dec 06 07:35:58 np0005548790.novalocal sudo[21888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:35:58 np0005548790.novalocal python3[21890]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 06 07:35:58 np0005548790.novalocal sudo[21888]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:02 np0005548790.novalocal sudo[21908]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zgcmjqnghtusrrbdxqabedmdyyxvjexl ; /usr/bin/python3
Dec 06 07:36:02 np0005548790.novalocal sudo[21908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:36:02 np0005548790.novalocal python3[21910]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 07:36:02 np0005548790.novalocal systemd[1]: Starting LSB: Bring up/down networking...
Dec 06 07:36:02 np0005548790.novalocal network[21913]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:02 np0005548790.novalocal network[21924]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:02 np0005548790.novalocal network[21913]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:02 np0005548790.novalocal network[21925]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:02 np0005548790.novalocal network[21913]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 07:36:02 np0005548790.novalocal network[21926]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 07:36:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006562.5343] audit: op="connections-reload" pid=21954 uid=0 result="success"
Dec 06 07:36:02 np0005548790.novalocal network[21913]: Bringing up loopback interface:  [  OK  ]
Dec 06 07:36:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006562.7148] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22042 uid=0 result="success"
Dec 06 07:36:02 np0005548790.novalocal network[21913]: Bringing up interface eth0:  [  OK  ]
Dec 06 07:36:02 np0005548790.novalocal systemd[1]: Started LSB: Bring up/down networking.
Dec 06 07:36:02 np0005548790.novalocal sudo[21908]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:02 np0005548790.novalocal sudo[22081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-covwniylfkydpdvsmwafudqfdibuivdb ; /usr/bin/python3
Dec 06 07:36:02 np0005548790.novalocal sudo[22081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:36:03 np0005548790.novalocal python3[22083]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Starting Open vSwitch Database Unit...
Dec 06 07:36:04 np0005548790.novalocal chown[22087]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22092]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22092]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22092]: Starting ovsdb-server [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal ovs-vsctl[22141]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 06 07:36:04 np0005548790.novalocal ovs-vsctl[22161]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"33b2d0f4-3dae-458c-b286-c937c7cb3d9e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22092]: Configuring Open vSwitch system IDs [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22092]: Enabling remote OVSDB managers [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal ovs-vsctl[22167]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005548790.novalocal
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Started Open vSwitch Database Unit.
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 06 07:36:04 np0005548790.novalocal kernel: openvswitch: Open vSwitch switching datapath
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22211]: Inserting openvswitch module [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22180]: Starting ovs-vswitchd [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal ovs-vsctl[22229]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005548790.novalocal
Dec 06 07:36:04 np0005548790.novalocal ovs-ctl[22180]: Enabling remote OVSDB managers [  OK  ]
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Starting Open vSwitch...
Dec 06 07:36:04 np0005548790.novalocal systemd[1]: Finished Open vSwitch.
Dec 06 07:36:04 np0005548790.novalocal sudo[22081]: pam_unix(sudo:session): session closed for user root
Dec 06 07:36:33 np0005548790.novalocal sudo[22245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnziytmrllmgnzvacmrfdgmrbsvtasab ; /usr/bin/python3
Dec 06 07:36:33 np0005548790.novalocal sudo[22245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:36:34 np0005548790.novalocal python3[22247]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:36:35 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006595.0992] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22406 uid=0 result="success"
Dec 06 07:36:35 np0005548790.novalocal ifup[22407]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:35 np0005548790.novalocal ifup[22408]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:35 np0005548790.novalocal ifup[22409]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:35 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006595.1322] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22415 uid=0 result="success"
Dec 06 07:36:35 np0005548790.novalocal ovs-vsctl[22417]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:26:ab:84 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Dec 06 07:36:35 np0005548790.novalocal kernel: device ovs-system entered promiscuous mode
Dec 06 07:36:35 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006595.1622] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Dec 06 07:36:35 np0005548790.novalocal kernel: Timeout policy base is empty
Dec 06 07:36:35 np0005548790.novalocal kernel: Failed to associated timeout policy `ovs_test_tp'
Dec 06 07:36:35 np0005548790.novalocal systemd-udevd[22418]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:35 np0005548790.novalocal systemd-udevd[22433]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:35 np0005548790.novalocal kernel: device br-ex entered promiscuous mode
Dec 06 07:36:35 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006595.2006] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Dec 06 07:36:35 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006595.2263] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22443 uid=0 result="success"
Dec 06 07:36:35 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006595.2450] device (br-ex): carrier: link connected
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.2952] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22472 uid=0 result="success"
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.3406] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22487 uid=0 result="success"
Dec 06 07:36:38 np0005548790.novalocal NET[22512]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.4242] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.4418] dhcp4 (eth1): canceled DHCP transaction
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.4419] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.4419] dhcp4 (eth1): state changed no lease
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.4469] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22521 uid=0 result="success"
Dec 06 07:36:38 np0005548790.novalocal ifup[22522]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:38 np0005548790.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 07:36:38 np0005548790.novalocal ifup[22523]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:38 np0005548790.novalocal ifup[22525]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:38 np0005548790.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.4776] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22539 uid=0 result="success"
Dec 06 07:36:38 np0005548790.novalocal sshd[22538]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.5203] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22550 uid=0 result="success"
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.5270] device (eth1): carrier: link connected
Dec 06 07:36:38 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006598.5441] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22559 uid=0 result="success"
Dec 06 07:36:38 np0005548790.novalocal ipv6_wait_tentative[22571]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 06 07:36:39 np0005548790.novalocal ipv6_wait_tentative[22576]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.6115] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22586 uid=0 result="success"
Dec 06 07:36:40 np0005548790.novalocal ovs-vsctl[22601]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Dec 06 07:36:40 np0005548790.novalocal kernel: device eth1 entered promiscuous mode
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.6810] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22609 uid=0 result="success"
Dec 06 07:36:40 np0005548790.novalocal ifup[22610]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:40 np0005548790.novalocal ifup[22611]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:40 np0005548790.novalocal ifup[22612]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.7134] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22618 uid=0 result="success"
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.7603] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22628 uid=0 result="success"
Dec 06 07:36:40 np0005548790.novalocal ifup[22629]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:40 np0005548790.novalocal ifup[22630]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:40 np0005548790.novalocal ifup[22631]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.7911] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22637 uid=0 result="success"
Dec 06 07:36:40 np0005548790.novalocal ovs-vsctl[22640]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 06 07:36:40 np0005548790.novalocal kernel: device vlan23 entered promiscuous mode
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.8353] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Dec 06 07:36:40 np0005548790.novalocal systemd-udevd[22642]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.8611] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22651 uid=0 result="success"
Dec 06 07:36:40 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006600.8815] device (vlan23): carrier: link connected
Dec 06 07:36:43 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006603.9347] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22680 uid=0 result="success"
Dec 06 07:36:43 np0005548790.novalocal sshd[22538]: Received disconnect from 45.78.219.217 port 46938:11: Bye Bye [preauth]
Dec 06 07:36:43 np0005548790.novalocal sshd[22538]: Disconnected from authenticating user root 45.78.219.217 port 46938 [preauth]
Dec 06 07:36:43 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006603.9824] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22695 uid=0 result="success"
Dec 06 07:36:44 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006604.0404] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22716 uid=0 result="success"
Dec 06 07:36:44 np0005548790.novalocal ifup[22717]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:44 np0005548790.novalocal ifup[22718]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:44 np0005548790.novalocal ifup[22719]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:44 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006604.0730] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22725 uid=0 result="success"
Dec 06 07:36:44 np0005548790.novalocal ovs-vsctl[22728]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 06 07:36:44 np0005548790.novalocal kernel: device vlan21 entered promiscuous mode
Dec 06 07:36:44 np0005548790.novalocal systemd-udevd[22730]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:44 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006604.1361] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Dec 06 07:36:44 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006604.1605] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22740 uid=0 result="success"
Dec 06 07:36:44 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006604.1798] device (vlan21): carrier: link connected
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.2279] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22770 uid=0 result="success"
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.2775] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22785 uid=0 result="success"
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.3388] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22806 uid=0 result="success"
Dec 06 07:36:47 np0005548790.novalocal ifup[22807]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:47 np0005548790.novalocal ifup[22808]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:47 np0005548790.novalocal ifup[22809]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.3706] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22815 uid=0 result="success"
Dec 06 07:36:47 np0005548790.novalocal ovs-vsctl[22818]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 06 07:36:47 np0005548790.novalocal kernel: device vlan20 entered promiscuous mode
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.4098] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Dec 06 07:36:47 np0005548790.novalocal systemd-udevd[22820]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.4374] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22830 uid=0 result="success"
Dec 06 07:36:47 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006607.4613] device (vlan20): carrier: link connected
Dec 06 07:36:48 np0005548790.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.5149] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22860 uid=0 result="success"
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.5609] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22875 uid=0 result="success"
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.6180] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22896 uid=0 result="success"
Dec 06 07:36:50 np0005548790.novalocal ifup[22897]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:50 np0005548790.novalocal ifup[22898]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:50 np0005548790.novalocal ifup[22899]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.6489] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22905 uid=0 result="success"
Dec 06 07:36:50 np0005548790.novalocal ovs-vsctl[22908]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 06 07:36:50 np0005548790.novalocal kernel: device vlan44 entered promiscuous mode
Dec 06 07:36:50 np0005548790.novalocal systemd-udevd[22910]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.6872] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.7119] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22920 uid=0 result="success"
Dec 06 07:36:50 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006610.7317] device (vlan44): carrier: link connected
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.7794] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22950 uid=0 result="success"
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.8261] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22965 uid=0 result="success"
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.8814] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22986 uid=0 result="success"
Dec 06 07:36:53 np0005548790.novalocal ifup[22987]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:53 np0005548790.novalocal ifup[22988]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:53 np0005548790.novalocal ifup[22989]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.9104] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22995 uid=0 result="success"
Dec 06 07:36:53 np0005548790.novalocal ovs-vsctl[22998]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 06 07:36:53 np0005548790.novalocal kernel: device vlan22 entered promiscuous mode
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.9469] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Dec 06 07:36:53 np0005548790.novalocal systemd-udevd[23000]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.9714] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23010 uid=0 result="success"
Dec 06 07:36:53 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006613.9915] device (vlan22): carrier: link connected
Dec 06 07:36:57 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006617.0572] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23040 uid=0 result="success"
Dec 06 07:36:57 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006617.1025] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23055 uid=0 result="success"
Dec 06 07:36:57 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006617.1606] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23076 uid=0 result="success"
Dec 06 07:36:57 np0005548790.novalocal ifup[23077]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:57 np0005548790.novalocal ifup[23078]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:57 np0005548790.novalocal ifup[23079]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:57 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006617.1930] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23085 uid=0 result="success"
Dec 06 07:36:57 np0005548790.novalocal ovs-vsctl[23088]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 06 07:36:57 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006617.2556] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23095 uid=0 result="success"
Dec 06 07:36:58 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006618.3204] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23122 uid=0 result="success"
Dec 06 07:36:58 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006618.3613] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23137 uid=0 result="success"
Dec 06 07:36:58 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006618.4148] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23158 uid=0 result="success"
Dec 06 07:36:58 np0005548790.novalocal ifup[23159]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:58 np0005548790.novalocal ifup[23160]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:58 np0005548790.novalocal ifup[23161]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:58 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006618.4419] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23167 uid=0 result="success"
Dec 06 07:36:58 np0005548790.novalocal ovs-vsctl[23170]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 06 07:36:58 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006618.4932] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23177 uid=0 result="success"
Dec 06 07:36:59 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006619.5502] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23205 uid=0 result="success"
Dec 06 07:36:59 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006619.5965] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23220 uid=0 result="success"
Dec 06 07:36:59 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006619.6564] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23241 uid=0 result="success"
Dec 06 07:36:59 np0005548790.novalocal ifup[23242]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:36:59 np0005548790.novalocal ifup[23243]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:36:59 np0005548790.novalocal ifup[23244]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:36:59 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006619.6886] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23250 uid=0 result="success"
Dec 06 07:36:59 np0005548790.novalocal ovs-vsctl[23253]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 06 07:36:59 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006619.7443] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23260 uid=0 result="success"
Dec 06 07:37:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006620.8039] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23288 uid=0 result="success"
Dec 06 07:37:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006620.8523] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23303 uid=0 result="success"
Dec 06 07:37:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006620.9119] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23324 uid=0 result="success"
Dec 06 07:37:00 np0005548790.novalocal ifup[23325]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:37:00 np0005548790.novalocal ifup[23326]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:37:00 np0005548790.novalocal ifup[23327]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:37:00 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006620.9443] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23333 uid=0 result="success"
Dec 06 07:37:00 np0005548790.novalocal ovs-vsctl[23336]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 06 07:37:01 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006621.0022] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23343 uid=0 result="success"
Dec 06 07:37:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006622.0639] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23371 uid=0 result="success"
Dec 06 07:37:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006622.1105] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23386 uid=0 result="success"
Dec 06 07:37:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006622.1634] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23407 uid=0 result="success"
Dec 06 07:37:02 np0005548790.novalocal ifup[23408]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 06 07:37:02 np0005548790.novalocal ifup[23409]: 'network-scripts' will be removed from distribution in near future.
Dec 06 07:37:02 np0005548790.novalocal ifup[23410]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 06 07:37:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006622.1896] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23416 uid=0 result="success"
Dec 06 07:37:02 np0005548790.novalocal ovs-vsctl[23419]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 06 07:37:02 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006622.2359] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23426 uid=0 result="success"
Dec 06 07:37:03 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006623.3009] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23454 uid=0 result="success"
Dec 06 07:37:03 np0005548790.novalocal NetworkManager[5968]: <info>  [1765006623.3433] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23469 uid=0 result="success"
Dec 06 07:37:03 np0005548790.novalocal sudo[22245]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:30 np0005548790.novalocal python3[23501]: ansible-ansible.legacy.command Invoked with _raw_params=ip a
                                                       ping -c 2 -W 2 192.168.122.10
                                                       ping -c 2 -W 2 192.168.122.11
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:37:34 np0005548790.novalocal python3[23520]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:37:34 np0005548790.novalocal sudo[23534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxianmdfreigiehbkocdkwxrcecfxewl ; /usr/bin/python3
Dec 06 07:37:34 np0005548790.novalocal sudo[23534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:37:34 np0005548790.novalocal python3[23536]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:37:34 np0005548790.novalocal sudo[23534]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:36 np0005548790.novalocal python3[23550]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:37:36 np0005548790.novalocal sudo[23564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqthukfjkkexpdtyqkkgrccruaoqhzbf ; /usr/bin/python3
Dec 06 07:37:36 np0005548790.novalocal sudo[23564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:37:36 np0005548790.novalocal python3[23566]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 06 07:37:36 np0005548790.novalocal sudo[23564]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:37 np0005548790.novalocal python3[23580]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Dec 06 07:37:38 np0005548790.novalocal python3[23595]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005548790.novalocal"
                                                       hostname_str_array=(${hostname//./ })
                                                       echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
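The shell step recorded above derives the short host name from the FQDN using bash pattern substitution and array word-splitting. A minimal sketch of that expansion, with the value taken from the log (the `/home/zuul/ansible_hostname` write is omitted):

```shell
# Sketch of the logged step: replace every "." with a space via bash
# pattern substitution, let word splitting build an array, then take
# the first element as the short host name.
hostname="np0005548790.novalocal"
hostname_str_array=(${hostname//./ })   # → (np0005548790 novalocal)
echo "${hostname_str_array[0]}"         # prints "np0005548790"
```

The unquoted expansion is what makes the split work: quoting `"${hostname//./ }"` would keep the whole string as a single array element.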
Dec 06 07:37:38 np0005548790.novalocal sudo[23613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nofddkfnbzmgbcadqitgetzbthfrqyig ; /usr/bin/python3
Dec 06 07:37:38 np0005548790.novalocal sudo[23613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:37:38 np0005548790.novalocal python3[23615]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)
                                                       hostnamectl hostname "$hostname.localdomain"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-1bf3-6840-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:37:39 np0005548790.novalocal systemd[1]: Starting Hostname Service...
Dec 06 07:37:39 np0005548790.novalocal systemd[1]: Started Hostname Service.
Dec 06 07:37:39 np0005548790.localdomain systemd-hostnamed[23619]: Hostname set to <np0005548790.localdomain> (static)
Dec 06 07:37:39 np0005548790.localdomain NetworkManager[5968]: <info>  [1765006659.0830] hostname: static hostname changed from "np0005548790.novalocal" to "np0005548790.localdomain"
Dec 06 07:37:39 np0005548790.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 06 07:37:39 np0005548790.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 06 07:37:39 np0005548790.localdomain sudo[23613]: pam_unix(sudo:session): session closed for user root
Dec 06 07:37:40 np0005548790.localdomain sshd[19071]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:37:40 np0005548790.localdomain systemd[1]: session-10.scope: Deactivated successfully.
Dec 06 07:37:40 np0005548790.localdomain systemd[1]: session-10.scope: Consumed 1min 43.941s CPU time.
Dec 06 07:37:40 np0005548790.localdomain systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Dec 06 07:37:40 np0005548790.localdomain systemd-logind[760]: Removed session 10.
Dec 06 07:37:43 np0005548790.localdomain sshd[23630]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:37:43 np0005548790.localdomain sshd[23630]: Accepted publickey for zuul from 38.102.83.114 port 38014 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:37:43 np0005548790.localdomain systemd-logind[760]: New session 11 of user zuul.
Dec 06 07:37:43 np0005548790.localdomain systemd[1]: Started Session 11 of User zuul.
Dec 06 07:37:43 np0005548790.localdomain sshd[23630]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:37:44 np0005548790.localdomain python3[23647]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 06 07:37:45 np0005548790.localdomain sshd[23630]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:37:45 np0005548790.localdomain systemd[1]: session-11.scope: Deactivated successfully.
Dec 06 07:37:45 np0005548790.localdomain systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Dec 06 07:37:45 np0005548790.localdomain systemd-logind[760]: Removed session 11.
Dec 06 07:37:49 np0005548790.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 06 07:38:09 np0005548790.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 07:38:37 np0005548790.localdomain sshd[23650]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:38:37 np0005548790.localdomain sshd[23650]: Accepted publickey for zuul from 38.102.83.114 port 48120 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:38:37 np0005548790.localdomain systemd-logind[760]: New session 12 of user zuul.
Dec 06 07:38:37 np0005548790.localdomain systemd[1]: Started Session 12 of User zuul.
Dec 06 07:38:37 np0005548790.localdomain sshd[23650]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:38:37 np0005548790.localdomain sudo[23667]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmcxclagexjqmiggzxvgpjhiwqaziusw ; /usr/bin/python3
Dec 06 07:38:37 np0005548790.localdomain sudo[23667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:38:38 np0005548790.localdomain python3[23669]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 07:38:41 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:38:41 np0005548790.localdomain systemd-rc-local-generator[23703]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:38:41 np0005548790.localdomain systemd-sysv-generator[23707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:38:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:38:41 np0005548790.localdomain systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:38:42 np0005548790.localdomain systemd-rc-local-generator[23747]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:38:42 np0005548790.localdomain systemd-sysv-generator[23751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:38:42 np0005548790.localdomain systemd-rc-local-generator[23791]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:38:42 np0005548790.localdomain systemd-sysv-generator[23794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Listening on LVM2 poll daemon socket.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 07:38:42 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:38:43 np0005548790.localdomain systemd-rc-local-generator[23841]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:38:43 np0005548790.localdomain systemd-sysv-generator[23845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: run-r1cb93db9c0a44f60992612da2ad3a2ad.service: Deactivated successfully.
Dec 06 07:38:43 np0005548790.localdomain systemd[1]: run-rf4960b5d7f454b97a2a9b317a414866e.service: Deactivated successfully.
Dec 06 07:38:44 np0005548790.localdomain sudo[23667]: pam_unix(sudo:session): session closed for user root
Dec 06 07:39:24 np0005548790.localdomain sshd[24442]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:39:44 np0005548790.localdomain sshd[23653]: Received disconnect from 38.102.83.114 port 48120:11: disconnected by user
Dec 06 07:39:44 np0005548790.localdomain sshd[23653]: Disconnected from user zuul 38.102.83.114 port 48120
Dec 06 07:39:44 np0005548790.localdomain sshd[23650]: pam_unix(sshd:session): session closed for user zuul
Dec 06 07:39:44 np0005548790.localdomain systemd[1]: session-12.scope: Deactivated successfully.
Dec 06 07:39:44 np0005548790.localdomain systemd[1]: session-12.scope: Consumed 4.605s CPU time.
Dec 06 07:39:44 np0005548790.localdomain systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Dec 06 07:39:44 np0005548790.localdomain systemd-logind[760]: Removed session 12.
Dec 06 07:39:47 np0005548790.localdomain sshd[24442]: Connection closed by 45.78.219.217 port 49854 [preauth]
Dec 06 07:41:22 np0005548790.localdomain sshd[24445]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:41:23 np0005548790.localdomain sshd[24445]: Invalid user ubuntu from 193.32.162.146 port 38854
Dec 06 07:41:23 np0005548790.localdomain sshd[24445]: Connection closed by invalid user ubuntu 193.32.162.146 port 38854 [preauth]
Dec 06 07:42:02 np0005548790.localdomain sshd[24447]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:42:04 np0005548790.localdomain sshd[24447]: Received disconnect from 45.78.219.217 port 52214:11: Bye Bye [preauth]
Dec 06 07:42:04 np0005548790.localdomain sshd[24447]: Disconnected from authenticating user root 45.78.219.217 port 52214 [preauth]
Dec 06 07:43:01 np0005548790.localdomain anacron[6186]: Job `cron.daily' started
Dec 06 07:43:01 np0005548790.localdomain anacron[6186]: Job `cron.daily' terminated
Dec 06 07:44:39 np0005548790.localdomain sshd[24451]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:44:43 np0005548790.localdomain sshd[24451]: Received disconnect from 45.78.219.217 port 51428:11: Bye Bye [preauth]
Dec 06 07:44:43 np0005548790.localdomain sshd[24451]: Disconnected from authenticating user root 45.78.219.217 port 51428 [preauth]
Dec 06 07:44:44 np0005548790.localdomain sshd[24453]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:44:44 np0005548790.localdomain sshd[24453]: Invalid user validator from 193.32.162.146 port 51262
Dec 06 07:44:44 np0005548790.localdomain sshd[24453]: Connection closed by invalid user validator 193.32.162.146 port 51262 [preauth]
Dec 06 07:47:16 np0005548790.localdomain sshd[24457]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:47:18 np0005548790.localdomain sshd[24457]: Received disconnect from 45.78.219.217 port 43300:11: Bye Bye [preauth]
Dec 06 07:47:18 np0005548790.localdomain sshd[24457]: Disconnected from authenticating user root 45.78.219.217 port 43300 [preauth]
Dec 06 07:47:32 np0005548790.localdomain sshd[24459]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:47:35 np0005548790.localdomain sshd[24459]: Connection reset by authenticating user root 45.135.232.92 port 30902 [preauth]
Dec 06 07:47:36 np0005548790.localdomain sshd[24461]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:47:38 np0005548790.localdomain sshd[24461]: Connection reset by authenticating user root 45.135.232.92 port 39876 [preauth]
Dec 06 07:47:38 np0005548790.localdomain sshd[24463]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:47:39 np0005548790.localdomain sshd[24463]: Connection reset by authenticating user root 45.135.232.92 port 39900 [preauth]
Dec 06 07:47:40 np0005548790.localdomain sshd[24465]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:47:43 np0005548790.localdomain sshd[24465]: Invalid user oracle from 45.135.232.92 port 39908
Dec 06 07:47:43 np0005548790.localdomain sshd[24465]: Connection reset by invalid user oracle 45.135.232.92 port 39908 [preauth]
Dec 06 07:47:43 np0005548790.localdomain sshd[24467]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:47:45 np0005548790.localdomain sshd[24467]: Connection reset by authenticating user root 45.135.232.92 port 39914 [preauth]
Dec 06 07:48:07 np0005548790.localdomain sshd[24469]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:48:07 np0005548790.localdomain sshd[24469]: Invalid user node from 193.32.162.146 port 35442
Dec 06 07:48:07 np0005548790.localdomain sshd[24469]: Connection closed by invalid user node 193.32.162.146 port 35442 [preauth]
Dec 06 07:49:51 np0005548790.localdomain sshd[24471]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:51:23 np0005548790.localdomain sshd[24474]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:51:23 np0005548790.localdomain sshd[24474]: Invalid user solana from 193.32.162.146 port 47872
Dec 06 07:51:23 np0005548790.localdomain sshd[24474]: Connection closed by invalid user solana 193.32.162.146 port 47872 [preauth]
Dec 06 07:51:51 np0005548790.localdomain sshd[24471]: fatal: Timeout before authentication for 45.78.219.217 port 45076
Dec 06 07:52:29 np0005548790.localdomain sshd[24477]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:52:35 np0005548790.localdomain sshd[24477]: Received disconnect from 45.78.219.217 port 47416:11: Bye Bye [preauth]
Dec 06 07:52:35 np0005548790.localdomain sshd[24477]: Disconnected from authenticating user root 45.78.219.217 port 47416 [preauth]
Dec 06 07:54:33 np0005548790.localdomain sshd[24480]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:54:33 np0005548790.localdomain sshd[24480]: Invalid user sol from 193.32.162.146 port 60278
Dec 06 07:54:34 np0005548790.localdomain sshd[24480]: Connection closed by invalid user sol 193.32.162.146 port 60278 [preauth]
Dec 06 07:55:05 np0005548790.localdomain sshd[24482]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:55:07 np0005548790.localdomain sshd[24482]: Received disconnect from 45.78.219.217 port 45966:11: Bye Bye [preauth]
Dec 06 07:55:07 np0005548790.localdomain sshd[24482]: Disconnected from authenticating user root 45.78.219.217 port 45966 [preauth]
Dec 06 07:55:17 np0005548790.localdomain sshd[24484]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:55:17 np0005548790.localdomain sshd[24484]: Accepted publickey for zuul from 192.168.122.100 port 52180 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 07:55:17 np0005548790.localdomain systemd-logind[760]: New session 13 of user zuul.
Dec 06 07:55:17 np0005548790.localdomain systemd[1]: Started Session 13 of User zuul.
Dec 06 07:55:17 np0005548790.localdomain sshd[24484]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 07:55:17 np0005548790.localdomain sudo[24530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phaismymqcirmgshezvjqqrmokznufsk ; /usr/bin/python3
Dec 06 07:55:17 np0005548790.localdomain sudo[24530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:17 np0005548790.localdomain python3[24532]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 07:55:18 np0005548790.localdomain sudo[24530]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:19 np0005548790.localdomain sudo[24616]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgjsnzanxjtdtbzftdfelqcgvwzeshbm ; /usr/bin/python3
Dec 06 07:55:19 np0005548790.localdomain sudo[24616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:19 np0005548790.localdomain python3[24618]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 07:55:22 np0005548790.localdomain sudo[24616]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:22 np0005548790.localdomain sudo[24634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttghedoekaecsrtkaouaqakgyelhguve ; /usr/bin/python3
Dec 06 07:55:22 np0005548790.localdomain sudo[24634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:23 np0005548790.localdomain python3[24636]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 07:55:23 np0005548790.localdomain sudo[24634]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:23 np0005548790.localdomain sudo[24650]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jeovbtkyoykfyrvsbadfgvlsvpswoxtp ; /usr/bin/python3
Dec 06 07:55:23 np0005548790.localdomain sudo[24650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:23 np0005548790.localdomain python3[24652]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:55:23 np0005548790.localdomain kernel: loop: module loaded
Dec 06 07:55:23 np0005548790.localdomain kernel: loop3: detected capacity change from 0 to 14680064
Dec 06 07:55:23 np0005548790.localdomain sudo[24650]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:24 np0005548790.localdomain sudo[24674]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxvvwunrtkchpayuhsfkmyffbeedecke ; /usr/bin/python3
Dec 06 07:55:24 np0005548790.localdomain sudo[24674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:24 np0005548790.localdomain python3[24676]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                                         vgcreate ceph_vg0 /dev/loop3
                                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
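The two command records above (dd/losetup, then pvcreate/vgcreate/lvcreate) build a loopback-backed LVM volume for a Ceph OSD. A hedged reconstruction of the sequence as logged — paths, device names, and sizes are taken from the journal; the losetup and LVM steps require root:

```shell
# 1. Allocate a sparse 7 GiB backing file: count=0 writes no data and
#    seek=7G sets only the apparent size, so no disk blocks are
#    consumed up front (matches the logged capacity of 14680064
#    512-byte sectors on loop3).
dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G

# 2. Attach the image to a loop device so LVM sees a block device.
losetup /dev/loop3 /var/lib/ceph-osd-0.img

# 3. Layer LVM on top: one PV, one VG, and one LV spanning all free
#    extents, which a Ceph OSD can later consume.
pvcreate /dev/loop3
vgcreate ceph_vg0 /dev/loop3
lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
```

The autoactivation messages that follow in the journal (`PV /dev/loop3 online, VG ceph_vg0 is complete`) are systemd/LVM event-driven activation reacting to the vgcreate, not additional commands from the playbook.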
Dec 06 07:55:24 np0005548790.localdomain lvm[24679]: PV /dev/loop3 not used.
Dec 06 07:55:24 np0005548790.localdomain lvm[24681]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 07:55:24 np0005548790.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 06 07:55:24 np0005548790.localdomain lvm[24690]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 07:55:24 np0005548790.localdomain lvm[24690]: VG ceph_vg0 finished
Dec 06 07:55:24 np0005548790.localdomain lvm[24691]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 06 07:55:24 np0005548790.localdomain systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 06 07:55:24 np0005548790.localdomain sudo[24674]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:24 np0005548790.localdomain sudo[24738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lquthxeussajnwfddfxawrmzfdyjleay ; /usr/bin/python3
Dec 06 07:55:24 np0005548790.localdomain sudo[24738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:25 np0005548790.localdomain python3[24740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:55:25 np0005548790.localdomain sudo[24738]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:25 np0005548790.localdomain sudo[24781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nozvcmcldkzltipepkoqijckxjsssnox ; /usr/bin/python3
Dec 06 07:55:25 np0005548790.localdomain sudo[24781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:25 np0005548790.localdomain python3[24783]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007724.7804394-54545-4424826975522/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:25 np0005548790.localdomain sudo[24781]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:26 np0005548790.localdomain sudo[24811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovzxroofzlrsjfohvbiqrssmwsjhpfmr ; /usr/bin/python3
Dec 06 07:55:26 np0005548790.localdomain sudo[24811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:26 np0005548790.localdomain python3[24813]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 07:55:26 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:55:26 np0005548790.localdomain systemd-rc-local-generator[24841]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:55:26 np0005548790.localdomain systemd-sysv-generator[24846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:55:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:55:26 np0005548790.localdomain systemd[1]: Starting Ceph OSD losetup...
Dec 06 07:55:26 np0005548790.localdomain bash[24855]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img)
Dec 06 07:55:26 np0005548790.localdomain systemd[1]: Finished Ceph OSD losetup.
Dec 06 07:55:26 np0005548790.localdomain lvm[24856]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 07:55:26 np0005548790.localdomain lvm[24856]: VG ceph_vg0 finished
Dec 06 07:55:26 np0005548790.localdomain sudo[24811]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:27 np0005548790.localdomain sudo[24871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mptknxvpisrnrjclcotjrqqswmjoxwkg ; /usr/bin/python3
Dec 06 07:55:27 np0005548790.localdomain sudo[24871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:27 np0005548790.localdomain python3[24873]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 07:55:29 np0005548790.localdomain sudo[24871]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:30 np0005548790.localdomain sudo[24888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lsrldywywxfvxwzcttjlaflkliatwiur ; /usr/bin/python3
Dec 06 07:55:30 np0005548790.localdomain sudo[24888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:30 np0005548790.localdomain python3[24890]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 07:55:30 np0005548790.localdomain sudo[24888]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:30 np0005548790.localdomain sudo[24904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcqggtsbxfbatykdbjrnlmzreggjynkv ; /usr/bin/python3
Dec 06 07:55:30 np0005548790.localdomain sudo[24904]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:30 np0005548790.localdomain python3[24906]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:55:30 np0005548790.localdomain kernel: loop4: detected capacity change from 0 to 14680064
Dec 06 07:55:30 np0005548790.localdomain sudo[24904]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:31 np0005548790.localdomain sudo[24926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxcoydfevbewhqybqhfyamfreuqfrwjh ; /usr/bin/python3
Dec 06 07:55:31 np0005548790.localdomain sudo[24926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:31 np0005548790.localdomain python3[24928]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                                         vgcreate ceph_vg1 /dev/loop4
                                                         lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:55:31 np0005548790.localdomain lvm[24931]: PV /dev/loop4 not used.
Dec 06 07:55:31 np0005548790.localdomain lvm[24933]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 06 07:55:31 np0005548790.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 06 07:55:31 np0005548790.localdomain lvm[24943]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 06 07:55:31 np0005548790.localdomain systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 06 07:55:31 np0005548790.localdomain sudo[24926]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:32 np0005548790.localdomain sudo[24989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vghlngkpqkusnqvtldlkinwxptrxmbjg ; /usr/bin/python3
Dec 06 07:55:32 np0005548790.localdomain sudo[24989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:32 np0005548790.localdomain python3[24991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:55:32 np0005548790.localdomain sudo[24989]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:32 np0005548790.localdomain sudo[25032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyacqwpkekbblmrirdttptbstkleuzgp ; /usr/bin/python3
Dec 06 07:55:32 np0005548790.localdomain sudo[25032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:32 np0005548790.localdomain python3[25034]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007731.8909247-54739-255158378330488/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:32 np0005548790.localdomain sudo[25032]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:33 np0005548790.localdomain sudo[25062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esuqcmjxoslqgveszzxfgrppfuwzmbuv ; /usr/bin/python3
Dec 06 07:55:33 np0005548790.localdomain sudo[25062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:33 np0005548790.localdomain python3[25064]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 07:55:33 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:55:33 np0005548790.localdomain systemd-rc-local-generator[25089]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:55:33 np0005548790.localdomain systemd-sysv-generator[25092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:55:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:55:33 np0005548790.localdomain systemd[1]: Starting Ceph OSD losetup...
Dec 06 07:55:33 np0005548790.localdomain bash[25105]: /dev/loop4: [64516]:8400144 (/var/lib/ceph-osd-1.img)
Dec 06 07:55:33 np0005548790.localdomain systemd[1]: Finished Ceph OSD losetup.
Dec 06 07:55:33 np0005548790.localdomain lvm[25106]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 06 07:55:33 np0005548790.localdomain lvm[25106]: VG ceph_vg1 finished
Dec 06 07:55:33 np0005548790.localdomain sudo[25062]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:42 np0005548790.localdomain sudo[25150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epyoviowybafbyugpjqkxtxojuwtmmpz ; /usr/bin/python3
Dec 06 07:55:42 np0005548790.localdomain sudo[25150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:42 np0005548790.localdomain python3[25152]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 07:55:42 np0005548790.localdomain sudo[25150]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:43 np0005548790.localdomain sudo[25170]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmmbzuhzvijrnpikywpmdjxclywupkll ; /usr/bin/python3
Dec 06 07:55:43 np0005548790.localdomain sudo[25170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:43 np0005548790.localdomain python3[25172]: ansible-hostname Invoked with name=np0005548790.localdomain use=None
Dec 06 07:55:43 np0005548790.localdomain systemd[1]: Starting Hostname Service...
Dec 06 07:55:43 np0005548790.localdomain systemd[1]: Started Hostname Service.
Dec 06 07:55:43 np0005548790.localdomain sudo[25170]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:45 np0005548790.localdomain sudo[25193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mflcbuzuxzqzbjhssbqbqqtxbfivztjo ; /usr/bin/python3
Dec 06 07:55:45 np0005548790.localdomain sudo[25193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:45 np0005548790.localdomain python3[25195]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 06 07:55:45 np0005548790.localdomain sudo[25193]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:46 np0005548790.localdomain sudo[25241]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwoyyqqjtmoyfypdhksvfrmraochndwy ; /usr/bin/python3
Dec 06 07:55:46 np0005548790.localdomain sudo[25241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:46 np0005548790.localdomain python3[25243]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.jsc4bbc2tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:46 np0005548790.localdomain sudo[25241]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:46 np0005548790.localdomain sudo[25271]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avaluccavxbkpcqrpiahfpdfqajznoua ; /usr/bin/python3
Dec 06 07:55:46 np0005548790.localdomain sudo[25271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:46 np0005548790.localdomain python3[25273]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.jsc4bbc2tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:46 np0005548790.localdomain sudo[25271]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:47 np0005548790.localdomain sudo[25287]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogsxfruhsrebsabsvlsphprpjwqummax ; /usr/bin/python3
Dec 06 07:55:47 np0005548790.localdomain sudo[25287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:47 np0005548790.localdomain python3[25289]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.jsc4bbc2tmphosts insertbefore=BOF block=192.168.122.106 np0005548788.localdomain np0005548788
                                                         192.168.122.106 np0005548788.ctlplane.localdomain np0005548788.ctlplane
                                                         192.168.122.107 np0005548789.localdomain np0005548789
                                                         192.168.122.107 np0005548789.ctlplane.localdomain np0005548789.ctlplane
                                                         192.168.122.108 np0005548790.localdomain np0005548790
                                                         192.168.122.108 np0005548790.ctlplane.localdomain np0005548790.ctlplane
                                                         192.168.122.103 np0005548785.localdomain np0005548785
                                                         192.168.122.103 np0005548785.ctlplane.localdomain np0005548785.ctlplane
                                                         192.168.122.104 np0005548786.localdomain np0005548786
                                                         192.168.122.104 np0005548786.ctlplane.localdomain np0005548786.ctlplane
                                                         192.168.122.105 np0005548787.localdomain np0005548787
                                                         192.168.122.105 np0005548787.ctlplane.localdomain np0005548787.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:47 np0005548790.localdomain sudo[25287]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:47 np0005548790.localdomain sudo[25303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdqaepfvqnzpgqcqtcbnghihwqhxahmo ; /usr/bin/python3
Dec 06 07:55:47 np0005548790.localdomain sudo[25303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:48 np0005548790.localdomain python3[25305]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.jsc4bbc2tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:55:48 np0005548790.localdomain sudo[25303]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:48 np0005548790.localdomain sudo[25320]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdqzjgbxpfsejuiuyxvknxvymncwsxfb ; /usr/bin/python3
Dec 06 07:55:48 np0005548790.localdomain sudo[25320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:48 np0005548790.localdomain python3[25322]: ansible-file Invoked with path=/tmp/ansible.jsc4bbc2tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:48 np0005548790.localdomain sudo[25320]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:50 np0005548790.localdomain sudo[25336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqzwtszkdsnwahippfthcfucvfrkrgwb ; /usr/bin/python3
Dec 06 07:55:50 np0005548790.localdomain sudo[25336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:50 np0005548790.localdomain python3[25338]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:55:50 np0005548790.localdomain sudo[25336]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:51 np0005548790.localdomain sudo[25354]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebfxwdrqrgtjgmutderkpubiwaucnpwj ; /usr/bin/python3
Dec 06 07:55:51 np0005548790.localdomain sudo[25354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:51 np0005548790.localdomain python3[25356]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 07:55:53 np0005548790.localdomain sudo[25354]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:55 np0005548790.localdomain sudo[25403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwfdlysevjdctxtyrovodhymtkuvluqt ; /usr/bin/python3
Dec 06 07:55:55 np0005548790.localdomain sudo[25403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:55 np0005548790.localdomain python3[25405]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:55:55 np0005548790.localdomain sudo[25403]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:55 np0005548790.localdomain sudo[25448]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lggtzbpehrvslrliimeqjvodtkcdlmuv ; /usr/bin/python3
Dec 06 07:55:55 np0005548790.localdomain sudo[25448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:56 np0005548790.localdomain python3[25450]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007755.1611066-55570-180448965185884/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:55:56 np0005548790.localdomain sudo[25448]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:57 np0005548790.localdomain sudo[25478]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfdpurgkrsxobjxipdvthhotfbmwqghi ; /usr/bin/python3
Dec 06 07:55:57 np0005548790.localdomain sudo[25478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:57 np0005548790.localdomain python3[25480]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 07:55:57 np0005548790.localdomain sudo[25478]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:58 np0005548790.localdomain sudo[25496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yboydrrtygdyzpdgziooczacjhiqgvmy ; /usr/bin/python3
Dec 06 07:55:58 np0005548790.localdomain sudo[25496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:58 np0005548790.localdomain python3[25498]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 07:55:59 np0005548790.localdomain chronyd[766]: chronyd exiting
Dec 06 07:55:59 np0005548790.localdomain systemd[1]: Stopping NTP client/server...
Dec 06 07:55:59 np0005548790.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 06 07:55:59 np0005548790.localdomain systemd[1]: Stopped NTP client/server.
Dec 06 07:55:59 np0005548790.localdomain systemd[1]: chronyd.service: Consumed 131ms CPU time, read 1.9M from disk, written 4.0K to disk.
Dec 06 07:55:59 np0005548790.localdomain systemd[1]: Starting NTP client/server...
Dec 06 07:55:59 np0005548790.localdomain chronyd[25505]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 07:55:59 np0005548790.localdomain chronyd[25505]: Frequency -26.394 +/- 0.097 ppm read from /var/lib/chrony/drift
Dec 06 07:55:59 np0005548790.localdomain chronyd[25505]: Loaded seccomp filter (level 2)
Dec 06 07:55:59 np0005548790.localdomain systemd[1]: Started NTP client/server.
Dec 06 07:55:59 np0005548790.localdomain sudo[25496]: pam_unix(sudo:session): session closed for user root
Dec 06 07:55:59 np0005548790.localdomain sudo[25552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klylcmgnirdgtobtcqcrsszqirzoympa ; /usr/bin/python3
Dec 06 07:55:59 np0005548790.localdomain sudo[25552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:55:59 np0005548790.localdomain python3[25554]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 07:55:59 np0005548790.localdomain sudo[25552]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:00 np0005548790.localdomain sudo[25595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-npqucroiinwfruicvnpzijhvdgfnuynn ; /usr/bin/python3
Dec 06 07:56:00 np0005548790.localdomain sudo[25595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:00 np0005548790.localdomain python3[25597]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765007759.5568666-55760-97001033481950/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 07:56:00 np0005548790.localdomain sudo[25595]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:00 np0005548790.localdomain sudo[25625]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvrzpwusqqsxvxdzcydrboynwwqsdije ; /usr/bin/python3
Dec 06 07:56:00 np0005548790.localdomain sudo[25625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:00 np0005548790.localdomain python3[25627]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 07:56:00 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:56:00 np0005548790.localdomain systemd-rc-local-generator[25645]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:56:00 np0005548790.localdomain systemd-sysv-generator[25653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:56:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:56:01 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:56:01 np0005548790.localdomain systemd-sysv-generator[25689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:56:01 np0005548790.localdomain systemd-rc-local-generator[25685]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:56:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:56:01 np0005548790.localdomain systemd[1]: Starting chronyd online sources service...
Dec 06 07:56:01 np0005548790.localdomain chronyc[25701]: 200 OK
Dec 06 07:56:01 np0005548790.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Dec 06 07:56:01 np0005548790.localdomain systemd[1]: Finished chronyd online sources service.
Dec 06 07:56:01 np0005548790.localdomain sudo[25625]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:01 np0005548790.localdomain sudo[25715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icyczzbyiwhoszxrxhpvsevbddwfqivt ; /usr/bin/python3
Dec 06 07:56:01 np0005548790.localdomain sudo[25715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:02 np0005548790.localdomain python3[25717]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:56:02 np0005548790.localdomain chronyd[25505]: System clock was stepped by 0.000000 seconds
Dec 06 07:56:02 np0005548790.localdomain sudo[25715]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:02 np0005548790.localdomain sudo[25732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eptqxqvajzebqnbtqllnxxrssoxzmfaj ; /usr/bin/python3
Dec 06 07:56:02 np0005548790.localdomain sudo[25732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:02 np0005548790.localdomain python3[25734]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 07:56:03 np0005548790.localdomain chronyd[25505]: Selected source 216.232.132.102 (pool.ntp.org)
Dec 06 07:56:12 np0005548790.localdomain sudo[25732]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:12 np0005548790.localdomain sudo[25749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-waezforcnlmxdqdrghnplxzduohnfwgq ; /usr/bin/python3
Dec 06 07:56:12 np0005548790.localdomain sudo[25749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:13 np0005548790.localdomain python3[25751]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 06 07:56:13 np0005548790.localdomain systemd[1]: Starting Time & Date Service...
Dec 06 07:56:13 np0005548790.localdomain systemd[1]: Started Time & Date Service.
Dec 06 07:56:13 np0005548790.localdomain sudo[25749]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:13 np0005548790.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 06 07:56:13 np0005548790.localdomain sudo[25772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chrypsbhbmrhprcekffkfdhzhdhqfima ; /usr/bin/python3
Dec 06 07:56:13 np0005548790.localdomain sudo[25772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:14 np0005548790.localdomain python3[25774]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 07:56:14 np0005548790.localdomain systemd[1]: Stopping NTP client/server...
Dec 06 07:56:14 np0005548790.localdomain chronyd[25505]: chronyd exiting
Dec 06 07:56:14 np0005548790.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 06 07:56:14 np0005548790.localdomain systemd[1]: Stopped NTP client/server.
Dec 06 07:56:14 np0005548790.localdomain systemd[1]: Starting NTP client/server...
Dec 06 07:56:14 np0005548790.localdomain chronyd[25781]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 07:56:14 np0005548790.localdomain chronyd[25781]: Frequency -26.394 +/- 0.097 ppm read from /var/lib/chrony/drift
Dec 06 07:56:14 np0005548790.localdomain chronyd[25781]: Loaded seccomp filter (level 2)
Dec 06 07:56:14 np0005548790.localdomain systemd[1]: Started NTP client/server.
Dec 06 07:56:14 np0005548790.localdomain sudo[25772]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:18 np0005548790.localdomain chronyd[25781]: Selected source 51.222.111.13 (pool.ntp.org)
Dec 06 07:56:30 np0005548790.localdomain sudo[25796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxltsyybabuzoikozywktpbrvnblivml ; /usr/bin/python3
Dec 06 07:56:30 np0005548790.localdomain sudo[25796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:31 np0005548790.localdomain useradd[25800]: new group: name=ceph-admin, GID=1002
Dec 06 07:56:31 np0005548790.localdomain useradd[25800]: new user: name=ceph-admin, UID=1002, GID=1002, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 06 07:56:31 np0005548790.localdomain sudo[25796]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:31 np0005548790.localdomain sudo[25852]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avusxtltiruinruxrqhjdpzozikrcnmm ; /usr/bin/python3
Dec 06 07:56:31 np0005548790.localdomain sudo[25852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:31 np0005548790.localdomain sudo[25852]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:32 np0005548790.localdomain sudo[25895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wteszazbsquyucnjadrjajudehxdqqqg ; /usr/bin/python3
Dec 06 07:56:32 np0005548790.localdomain sudo[25895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:32 np0005548790.localdomain sudo[25895]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:32 np0005548790.localdomain sudo[25925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxzwiazacxbnsorhscupsdjjdluezjtk ; /usr/bin/python3
Dec 06 07:56:32 np0005548790.localdomain sudo[25925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:32 np0005548790.localdomain sudo[25925]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:32 np0005548790.localdomain sudo[25941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlkfmszcdhnkcrmkdwkkwbtfxetchwbx ; /usr/bin/python3
Dec 06 07:56:32 np0005548790.localdomain sudo[25941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:33 np0005548790.localdomain sudo[25941]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:33 np0005548790.localdomain sudo[25957]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evofpxpchvfrswtwhshznsrxccrufyvk ; /usr/bin/python3
Dec 06 07:56:33 np0005548790.localdomain sudo[25957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:33 np0005548790.localdomain sudo[25957]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:33 np0005548790.localdomain sudo[25973]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csqrbloacwjadcvffwiamhmheodfnzgu ; /usr/bin/python3
Dec 06 07:56:33 np0005548790.localdomain sudo[25973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 07:56:34 np0005548790.localdomain sudo[25973]: pam_unix(sudo:session): session closed for user root
Dec 06 07:56:43 np0005548790.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 07:57:42 np0005548790.localdomain sshd[25978]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:57:42 np0005548790.localdomain sshd[25978]: Invalid user sol from 193.32.162.146 port 44446
Dec 06 07:57:42 np0005548790.localdomain sshd[25978]: Connection closed by invalid user sol 193.32.162.146 port 44446 [preauth]
Dec 06 07:57:43 np0005548790.localdomain sshd[25980]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:57:50 np0005548790.localdomain sshd[25980]: Connection closed by 45.78.219.217 port 48056 [preauth]
Dec 06 07:58:29 np0005548790.localdomain sshd[25982]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:29 np0005548790.localdomain sshd[25982]: Accepted publickey for ceph-admin from 192.168.122.103 port 35892 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 1002.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 06 07:58:29 np0005548790.localdomain systemd-logind[760]: New session 14 of user ceph-admin.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Starting User Manager for UID 1002...
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:29 np0005548790.localdomain sshd[25999]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Queued start job for default target Main User Target.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Created slice User Application Slice.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Reached target Paths.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Reached target Timers.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Starting D-Bus User Message Bus Socket...
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Starting Create User's Volatile Files and Directories...
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Listening on D-Bus User Message Bus Socket.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Reached target Sockets.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Finished Create User's Volatile Files and Directories.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Reached target Basic System.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Reached target Main User Target.
Dec 06 07:58:29 np0005548790.localdomain systemd[25986]: Startup finished in 98ms.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Started User Manager for UID 1002.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Started Session 14 of User ceph-admin.
Dec 06 07:58:29 np0005548790.localdomain sshd[25982]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:29 np0005548790.localdomain sshd[25999]: Accepted publickey for ceph-admin from 192.168.122.103 port 35902 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:29 np0005548790.localdomain systemd-logind[760]: New session 16 of user ceph-admin.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Started Session 16 of User ceph-admin.
Dec 06 07:58:29 np0005548790.localdomain sshd[25999]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:29 np0005548790.localdomain sudo[26006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:29 np0005548790.localdomain sudo[26006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:29 np0005548790.localdomain sudo[26006]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:29 np0005548790.localdomain sshd[26021]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:29 np0005548790.localdomain sshd[26021]: Accepted publickey for ceph-admin from 192.168.122.103 port 35908 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:29 np0005548790.localdomain systemd-logind[760]: New session 17 of user ceph-admin.
Dec 06 07:58:29 np0005548790.localdomain systemd[1]: Started Session 17 of User ceph-admin.
Dec 06 07:58:29 np0005548790.localdomain sshd[26021]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:29 np0005548790.localdomain sudo[26025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname np0005548790.localdomain
Dec 06 07:58:29 np0005548790.localdomain sudo[26025]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:30 np0005548790.localdomain sudo[26025]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:31 np0005548790.localdomain sshd[26040]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:31 np0005548790.localdomain sshd[26040]: Accepted publickey for ceph-admin from 192.168.122.103 port 35912 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:31 np0005548790.localdomain systemd-logind[760]: New session 18 of user ceph-admin.
Dec 06 07:58:31 np0005548790.localdomain systemd[1]: Started Session 18 of User ceph-admin.
Dec 06 07:58:31 np0005548790.localdomain sshd[26040]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:31 np0005548790.localdomain sudo[26044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Dec 06 07:58:31 np0005548790.localdomain sudo[26044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:31 np0005548790.localdomain sudo[26044]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:31 np0005548790.localdomain sshd[26059]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:32 np0005548790.localdomain sshd[26059]: Accepted publickey for ceph-admin from 192.168.122.103 port 35924 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:32 np0005548790.localdomain systemd-logind[760]: New session 19 of user ceph-admin.
Dec 06 07:58:32 np0005548790.localdomain systemd[1]: Started Session 19 of User ceph-admin.
Dec 06 07:58:32 np0005548790.localdomain sshd[26059]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:32 np0005548790.localdomain sudo[26063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 07:58:32 np0005548790.localdomain sudo[26063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:32 np0005548790.localdomain sudo[26063]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:32 np0005548790.localdomain sshd[26078]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:32 np0005548790.localdomain sshd[26078]: Accepted publickey for ceph-admin from 192.168.122.103 port 35932 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:32 np0005548790.localdomain systemd-logind[760]: New session 20 of user ceph-admin.
Dec 06 07:58:32 np0005548790.localdomain systemd[1]: Started Session 20 of User ceph-admin.
Dec 06 07:58:32 np0005548790.localdomain sshd[26078]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:32 np0005548790.localdomain sudo[26082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 07:58:32 np0005548790.localdomain sudo[26082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:32 np0005548790.localdomain sudo[26082]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:32 np0005548790.localdomain sshd[26097]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:32 np0005548790.localdomain sshd[26097]: Accepted publickey for ceph-admin from 192.168.122.103 port 35938 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:32 np0005548790.localdomain systemd-logind[760]: New session 21 of user ceph-admin.
Dec 06 07:58:32 np0005548790.localdomain systemd[1]: Started Session 21 of User ceph-admin.
Dec 06 07:58:32 np0005548790.localdomain sshd[26097]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:33 np0005548790.localdomain sudo[26101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Dec 06 07:58:33 np0005548790.localdomain sudo[26101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:33 np0005548790.localdomain sudo[26101]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:33 np0005548790.localdomain sshd[26116]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:33 np0005548790.localdomain sshd[26116]: Accepted publickey for ceph-admin from 192.168.122.103 port 35940 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:33 np0005548790.localdomain systemd-logind[760]: New session 22 of user ceph-admin.
Dec 06 07:58:33 np0005548790.localdomain systemd[1]: Started Session 22 of User ceph-admin.
Dec 06 07:58:33 np0005548790.localdomain sshd[26116]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:33 np0005548790.localdomain sudo[26120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 07:58:33 np0005548790.localdomain sudo[26120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:33 np0005548790.localdomain sudo[26120]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:33 np0005548790.localdomain sshd[26135]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:33 np0005548790.localdomain sshd[26135]: Accepted publickey for ceph-admin from 192.168.122.103 port 35950 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:33 np0005548790.localdomain systemd-logind[760]: New session 23 of user ceph-admin.
Dec 06 07:58:33 np0005548790.localdomain systemd[1]: Started Session 23 of User ceph-admin.
Dec 06 07:58:33 np0005548790.localdomain sshd[26135]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:33 np0005548790.localdomain sudo[26139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Dec 06 07:58:33 np0005548790.localdomain sudo[26139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:33 np0005548790.localdomain sudo[26139]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:33 np0005548790.localdomain sshd[26154]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:34 np0005548790.localdomain sshd[26154]: Accepted publickey for ceph-admin from 192.168.122.103 port 35966 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:34 np0005548790.localdomain systemd-logind[760]: New session 24 of user ceph-admin.
Dec 06 07:58:34 np0005548790.localdomain systemd[1]: Started Session 24 of User ceph-admin.
Dec 06 07:58:34 np0005548790.localdomain sshd[26154]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:34 np0005548790.localdomain sshd[26171]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:34 np0005548790.localdomain sshd[26171]: Accepted publickey for ceph-admin from 192.168.122.103 port 35978 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:34 np0005548790.localdomain systemd-logind[760]: New session 25 of user ceph-admin.
Dec 06 07:58:34 np0005548790.localdomain systemd[1]: Started Session 25 of User ceph-admin.
Dec 06 07:58:34 np0005548790.localdomain sshd[26171]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:34 np0005548790.localdomain sudo[26175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Dec 06 07:58:34 np0005548790.localdomain sudo[26175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:34 np0005548790.localdomain sudo[26175]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:34 np0005548790.localdomain sshd[26190]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 07:58:35 np0005548790.localdomain sshd[26190]: Accepted publickey for ceph-admin from 192.168.122.103 port 35990 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 07:58:35 np0005548790.localdomain systemd-logind[760]: New session 26 of user ceph-admin.
Dec 06 07:58:35 np0005548790.localdomain systemd[1]: Started Session 26 of User ceph-admin.
Dec 06 07:58:35 np0005548790.localdomain sshd[26190]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 07:58:35 np0005548790.localdomain sudo[26194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname np0005548790.localdomain
Dec 06 07:58:35 np0005548790.localdomain sudo[26194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:35 np0005548790.localdomain sudo[26194]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 np0005548790.localdomain sudo[26229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 07:58:47 np0005548790.localdomain sudo[26229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:47 np0005548790.localdomain sudo[26229]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 np0005548790.localdomain sudo[26244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:47 np0005548790.localdomain sudo[26244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:47 np0005548790.localdomain sudo[26244]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 np0005548790.localdomain sudo[26259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 07:58:47 np0005548790.localdomain sudo[26259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:47 np0005548790.localdomain sudo[26259]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 np0005548790.localdomain sudo[26294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:47 np0005548790.localdomain sudo[26294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:47 np0005548790.localdomain sudo[26294]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:47 np0005548790.localdomain sudo[26309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 07:58:47 np0005548790.localdomain sudo[26309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:48 np0005548790.localdomain sudo[26309]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:48 np0005548790.localdomain sudo[26363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:48 np0005548790.localdomain sudo[26363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:48 np0005548790.localdomain sudo[26363]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:48 np0005548790.localdomain sudo[26378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 07:58:48 np0005548790.localdomain sudo[26378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:48 np0005548790.localdomain systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26406 (sysctl)
Dec 06 07:58:48 np0005548790.localdomain systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 06 07:58:48 np0005548790.localdomain systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 06 07:58:49 np0005548790.localdomain sudo[26378]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:49 np0005548790.localdomain sudo[26429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:49 np0005548790.localdomain sudo[26429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:49 np0005548790.localdomain sudo[26429]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:49 np0005548790.localdomain sudo[26444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 07:58:49 np0005548790.localdomain sudo[26444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:49 np0005548790.localdomain sudo[26444]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:49 np0005548790.localdomain sudo[26479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:58:49 np0005548790.localdomain sudo[26479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:49 np0005548790.localdomain sudo[26479]: pam_unix(sudo:session): session closed for user root
Dec 06 07:58:49 np0005548790.localdomain sudo[26494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 07:58:49 np0005548790.localdomain sudo[26494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:58:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:58:53 np0005548790.localdomain kernel: VFS: idmapped mount is not enabled.
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 2025-12-06 07:59:13.711714058 +0000 UTC m=+23.301586127 container create 4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_easley, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main)
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 2025-12-06 07:58:50.44038563 +0000 UTC m=+0.030257709 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 07:59:13 np0005548790.localdomain systemd[1]: Created slice Slice /machine.
Dec 06 07:59:13 np0005548790.localdomain systemd[1]: Started libpod-conmon-4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12.scope.
Dec 06 07:59:13 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 2025-12-06 07:59:13.832286948 +0000 UTC m=+23.422159047 container init 4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_easley, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 2025-12-06 07:59:13.846657422 +0000 UTC m=+23.436529521 container start 4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_easley, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, version=7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 2025-12-06 07:59:13.84694057 +0000 UTC m=+23.436812669 container attach 4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_easley, version=7, GIT_CLEAN=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218)
Dec 06 07:59:13 np0005548790.localdomain suspicious_easley[26652]: 167 167
Dec 06 07:59:13 np0005548790.localdomain systemd[1]: libpod-4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12.scope: Deactivated successfully.
Dec 06 07:59:13 np0005548790.localdomain podman[26547]: 2025-12-06 07:59:13.85002945 +0000 UTC m=+23.439901589 container died 4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_easley, version=7, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 07:59:13 np0005548790.localdomain podman[26657]: 2025-12-06 07:59:13.937759085 +0000 UTC m=+0.079102302 container remove 4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=suspicious_easley, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, release=1763362218, io.buildah.version=1.41.4, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 07:59:13 np0005548790.localdomain systemd[1]: libpod-conmon-4e590d02a29d23854efdccd47d4501984736ff9dae5965e7acf82b5f6abd5d12.scope: Deactivated successfully.
Dec 06 07:59:14 np0005548790.localdomain podman[26678]: 
Dec 06 07:59:14 np0005548790.localdomain podman[26678]: 2025-12-06 07:59:14.117927977 +0000 UTC m=+0.043170265 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 07:59:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2779cf51f39f4f792fa40990841d00fbb184dd68532b8168caa951bf39e034b4-merged.mount: Deactivated successfully.
Dec 06 07:59:18 np0005548790.localdomain podman[26678]: 2025-12-06 07:59:18.479215291 +0000 UTC m=+4.404457629 container create f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_saha, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-type=git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4)
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: Started libpod-conmon-f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38.scope.
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 07:59:19 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7114b6f7936b60a11a6f01a1440c6063991c90129b491ffa02a2a85abd7a21d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:19 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7114b6f7936b60a11a6f01a1440c6063991c90129b491ffa02a2a85abd7a21d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:19 np0005548790.localdomain podman[26678]: 2025-12-06 07:59:19.072656837 +0000 UTC m=+4.997899125 container init f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_saha, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1763362218, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 07:59:19 np0005548790.localdomain podman[26678]: 2025-12-06 07:59:19.099430463 +0000 UTC m=+5.024672771 container start f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_saha, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 07:59:19 np0005548790.localdomain podman[26678]: 2025-12-06 07:59:19.099751732 +0000 UTC m=+5.024994090 container attach f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_saha, version=7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]: [
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:     {
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "available": false,
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "ceph_device": false,
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "lsm_data": {},
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "lvs": [],
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "path": "/dev/sr0",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "rejected_reasons": [
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "Has a FileSystem",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "Insufficient space (<5GB)"
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         ],
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         "sys_api": {
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "actuators": null,
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "device_nodes": "sr0",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "human_readable_size": "482.00 KB",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "id_bus": "ata",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "model": "QEMU DVD-ROM",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "nr_requests": "2",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "partitions": {},
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "path": "/dev/sr0",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "removable": "1",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "rev": "2.5+",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "ro": "0",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "rotational": "1",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "sas_address": "",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "sas_device_handle": "",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "scheduler_mode": "mq-deadline",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "sectors": 0,
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "sectorsize": "2048",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "size": 493568.0,
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "support_discard": "0",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "type": "disk",
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:             "vendor": "QEMU"
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:         }
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]:     }
Dec 06 07:59:19 np0005548790.localdomain brave_saha[27192]: ]
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: libpod-f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38.scope: Deactivated successfully.
Dec 06 07:59:19 np0005548790.localdomain podman[26678]: 2025-12-06 07:59:19.82958882 +0000 UTC m=+5.754831168 container died f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_saha, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=)
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: tmp-crun.8ciEFD.mount: Deactivated successfully.
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7114b6f7936b60a11a6f01a1440c6063991c90129b491ffa02a2a85abd7a21d9-merged.mount: Deactivated successfully.
Dec 06 07:59:19 np0005548790.localdomain podman[28417]: 2025-12-06 07:59:19.958851066 +0000 UTC m=+0.115573691 container remove f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_saha, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:59:19 np0005548790.localdomain systemd[1]: libpod-conmon-f0840864aa5081e901d012a73a3482763cadab6f12853f235edd283fe8472c38.scope: Deactivated successfully.
Dec 06 07:59:20 np0005548790.localdomain sudo[26494]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:20 np0005548790.localdomain sudo[28430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:20 np0005548790.localdomain sudo[28430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:59:20 np0005548790.localdomain sudo[28430]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:20 np0005548790.localdomain sudo[28445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 _orch set-coredump-overrides --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 --coredump-max-size=32G
Dec 06 07:59:20 np0005548790.localdomain sudo[28445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: systemd-coredump.socket: Deactivated successfully.
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: Closed Process Core Dump Socket.
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: Stopping Process Core Dump Socket...
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: Listening on Process Core Dump Socket.
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:20 np0005548790.localdomain systemd-sysv-generator[28498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:20 np0005548790.localdomain systemd-rc-local-generator[28491]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:20 np0005548790.localdomain systemd-rc-local-generator[28533]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:20 np0005548790.localdomain systemd-sysv-generator[28537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:21 np0005548790.localdomain sudo[28445]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:46 np0005548790.localdomain sudo[28546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:46 np0005548790.localdomain sudo[28546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:59:46 np0005548790.localdomain sudo[28546]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:46 np0005548790.localdomain sudo[28561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 07:59:46 np0005548790.localdomain sudo[28561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:59:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:59:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 2025-12-06 07:59:47.470757712 +0000 UTC m=+0.124768515 container create 42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_lovelace, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, release=1763362218)
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 2025-12-06 07:59:47.385967927 +0000 UTC m=+0.039978660 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: Started libpod-conmon-42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5.scope.
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 2025-12-06 07:59:47.537167283 +0000 UTC m=+0.191178056 container init 42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_lovelace, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, version=7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=)
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 2025-12-06 07:59:47.54895914 +0000 UTC m=+0.202969913 container start 42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_lovelace, RELEASE=main, name=rhceph, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, release=1763362218, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 2025-12-06 07:59:47.549244888 +0000 UTC m=+0.203255651 container attach 42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_lovelace, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 07:59:47 np0005548790.localdomain upbeat_lovelace[28633]: 167 167
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: libpod-42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5.scope: Deactivated successfully.
Dec 06 07:59:47 np0005548790.localdomain podman[28618]: 2025-12-06 07:59:47.553016665 +0000 UTC m=+0.207027478 container died 42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_lovelace, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vendor=Red Hat, Inc., io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1c2897728ee9eac6b689e532dfdfca2e06ff46c9bd6cb2a978439c645c1d8d73-merged.mount: Deactivated successfully.
Dec 06 07:59:47 np0005548790.localdomain podman[28638]: 2025-12-06 07:59:47.641838635 +0000 UTC m=+0.076141540 container remove 42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_lovelace, GIT_CLEAN=True, distribution-scope=public, version=7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218)
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: libpod-conmon-42264782e03668f5ea371c450097f565f58140cd827952bb9859a54c318b60b5.scope: Deactivated successfully.
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:47 np0005548790.localdomain systemd-rc-local-generator[28679]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:47 np0005548790.localdomain systemd-sysv-generator[28684]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:47 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:48 np0005548790.localdomain systemd-rc-local-generator[28721]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:48 np0005548790.localdomain systemd-sysv-generator[28725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reached target All Ceph clusters and services.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:48 np0005548790.localdomain systemd-rc-local-generator[28755]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:48 np0005548790.localdomain systemd-sysv-generator[28759]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reached target Ceph cluster 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:48 np0005548790.localdomain systemd-rc-local-generator[28796]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:48 np0005548790.localdomain systemd-sysv-generator[28801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 07:59:48 np0005548790.localdomain systemd-rc-local-generator[28838]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 07:59:48 np0005548790.localdomain systemd-sysv-generator[28843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Created slice Slice /system/ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reached target System Time Set.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Reached target System Time Synchronized.
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: Starting Ceph crash.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 07:59:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:59:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 06 07:59:49 np0005548790.localdomain podman[28900]: 
Dec 06 07:59:49 np0005548790.localdomain podman[28900]: 2025-12-06 07:59:49.225470163 +0000 UTC m=+0.067136763 container create 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, name=rhceph, io.openshift.tags=rhceph ceph)
Dec 06 07:59:49 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415b039821382b08e4307ba8db0dc67ce527df415e6b491ea386b3c8ca5d6e4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:49 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415b039821382b08e4307ba8db0dc67ce527df415e6b491ea386b3c8ca5d6e4d/merged/etc/ceph/ceph.client.crash.np0005548790.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:49 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415b039821382b08e4307ba8db0dc67ce527df415e6b491ea386b3c8ca5d6e4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:49 np0005548790.localdomain podman[28900]: 2025-12-06 07:59:49.19834855 +0000 UTC m=+0.040015130 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 07:59:49 np0005548790.localdomain podman[28900]: 2025-12-06 07:59:49.303549977 +0000 UTC m=+0.145216577 container init 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=)
Dec 06 07:59:49 np0005548790.localdomain podman[28900]: 2025-12-06 07:59:49.311221586 +0000 UTC m=+0.152888196 container start 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, release=1763362218, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64)
Dec 06 07:59:49 np0005548790.localdomain bash[28900]: 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3
Dec 06 07:59:49 np0005548790.localdomain systemd[1]: Started Ceph crash.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 07:59:49 np0005548790.localdomain sudo[28561]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.488+0000 7f6b0b498640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.488+0000 7f6b0b498640 -1 AuthRegistry(0x7f6b040680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.489+0000 7f6b0b498640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.489+0000 7f6b0b498640 -1 AuthRegistry(0x7f6b0b497000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.497+0000 7f6b09a0e640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.498+0000 7f6b0920d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.499+0000 7f6b08a0c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: 2025-12-06T07:59:49.499+0000 7f6b0b498640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 06 07:59:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790[28916]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 06 07:59:52 np0005548790.localdomain sudo[28933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 07:59:52 np0005548790.localdomain sudo[28933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:59:52 np0005548790.localdomain sudo[28933]: pam_unix(sudo:session): session closed for user root
Dec 06 07:59:52 np0005548790.localdomain sudo[28948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 --yes --no-systemd
Dec 06 07:59:52 np0005548790.localdomain sudo[28948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 2025-12-06 07:59:53.01768632 +0000 UTC m=+0.056369016 container create c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_grothendieck, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z)
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: Started libpod-conmon-c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627.scope.
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 2025-12-06 07:59:53.080378295 +0000 UTC m=+0.119061001 container init c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_grothendieck, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=)
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 2025-12-06 07:59:53.088810196 +0000 UTC m=+0.127492912 container start c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_grothendieck, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph)
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 2025-12-06 07:59:53.088944249 +0000 UTC m=+0.127626985 container attach c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_grothendieck, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True)
Dec 06 07:59:53 np0005548790.localdomain bold_grothendieck[29016]: 167 167
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: libpod-c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627.scope: Deactivated successfully.
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 2025-12-06 07:59:53.093430998 +0000 UTC m=+0.132113714 container died c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_grothendieck, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, RELEASE=main)
Dec 06 07:59:53 np0005548790.localdomain podman[29001]: 2025-12-06 07:59:52.995741555 +0000 UTC m=+0.034424301 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: tmp-crun.q8sAV0.mount: Deactivated successfully.
Dec 06 07:59:53 np0005548790.localdomain podman[29021]: 2025-12-06 07:59:53.181697471 +0000 UTC m=+0.079437623 container remove c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_grothendieck, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True)
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: libpod-conmon-c2289c98c881a9a3eec10193f5c300aae0452e7a765cb87846171cd6cd71b627.scope: Deactivated successfully.
Dec 06 07:59:53 np0005548790.localdomain podman[29044]: 
Dec 06 07:59:53 np0005548790.localdomain podman[29044]: 2025-12-06 07:59:53.36734141 +0000 UTC m=+0.061854353 container create ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_noyce, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z)
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: Started libpod-conmon-ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27.scope.
Dec 06 07:59:53 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 07:59:53 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd6cc32667569ffc9b1eec079502c6cbac470506d72b478b19062997b28e9dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:53 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd6cc32667569ffc9b1eec079502c6cbac470506d72b478b19062997b28e9dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:53 np0005548790.localdomain podman[29044]: 2025-12-06 07:59:53.34699391 +0000 UTC m=+0.041506843 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 07:59:53 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd6cc32667569ffc9b1eec079502c6cbac470506d72b478b19062997b28e9dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:53 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd6cc32667569ffc9b1eec079502c6cbac470506d72b478b19062997b28e9dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:53 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd6cc32667569ffc9b1eec079502c6cbac470506d72b478b19062997b28e9dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 07:59:53 np0005548790.localdomain podman[29044]: 2025-12-06 07:59:53.484428714 +0000 UTC m=+0.178941647 container init ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_noyce, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, RELEASE=main, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 07:59:53 np0005548790.localdomain podman[29044]: 2025-12-06 07:59:53.493778281 +0000 UTC m=+0.188291214 container start ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_noyce, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, release=1763362218, CEPH_POINT_RELEASE=)
Dec 06 07:59:53 np0005548790.localdomain podman[29044]: 2025-12-06 07:59:53.494044199 +0000 UTC m=+0.188557142 container attach ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_noyce, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, release=1763362218, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 06 07:59:53 np0005548790.localdomain condescending_noyce[29059]: --> passed data devices: 0 physical, 2 LVM
Dec 06 07:59:53 np0005548790.localdomain condescending_noyce[29059]: --> relative data size: 1.0
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 07:59:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b66bd673d406b2e01dd95350c266737c7c98ec2098afd46935cb65b4895cc356-merged.mount: Deactivated successfully.
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f0a16764-58fb-4e66-a34c-f3b971e966ce
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 07:59:54 np0005548790.localdomain lvm[29113]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 07:59:54 np0005548790.localdomain lvm[29113]: VG ceph_vg0 finished
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 07:59:54 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 06 07:59:55 np0005548790.localdomain condescending_noyce[29059]:  stderr: got monmap epoch 3
Dec 06 07:59:55 np0005548790.localdomain condescending_noyce[29059]: --> Creating keyring file for osd.0
Dec 06 07:59:55 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 06 07:59:55 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 06 07:59:55 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid f0a16764-58fb-4e66-a34c-f3b971e966ce --setuser ceph --setgroup ceph
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]:  stderr: 2025-12-06T07:59:55.167+0000 7ff825633a80 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]:  stderr: 2025-12-06T07:59:55.167+0000 7ff825633a80 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 07:59:57 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ddd2117f-b4d7-4acb-bc27-d7fd63bcf07f
Dec 06 07:59:58 np0005548790.localdomain lvm[30049]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 06 07:59:58 np0005548790.localdomain lvm[30049]: VG ceph_vg1 finished
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]:  stderr: got monmap epoch 3
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: --> Creating keyring file for osd.3
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Dec 06 07:59:58 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid ddd2117f-b4d7-4acb-bc27-d7fd63bcf07f --setuser ceph --setgroup ceph
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]:  stderr: 2025-12-06T07:59:58.965+0000 7ff3c9683a80 -1 bluestore(/var/lib/ceph/osd/ceph-3//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]:  stderr: 2025-12-06T07:59:58.965+0000 7ff3c9683a80 -1 bluestore(/var/lib/ceph/osd/ceph-3/) _read_fsid unparsable uuid
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: --> ceph-volume lvm activate successful for osd ID: 3
Dec 06 08:00:01 np0005548790.localdomain condescending_noyce[29059]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 06 08:00:01 np0005548790.localdomain systemd[1]: libpod-ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27.scope: Deactivated successfully.
Dec 06 08:00:01 np0005548790.localdomain systemd[1]: libpod-ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27.scope: Consumed 3.669s CPU time.
Dec 06 08:00:01 np0005548790.localdomain podman[29044]: 2025-12-06 08:00:01.305765337 +0000 UTC m=+8.000278320 container died ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_noyce, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, release=1763362218, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 08:00:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6bd6cc32667569ffc9b1eec079502c6cbac470506d72b478b19062997b28e9dd-merged.mount: Deactivated successfully.
Dec 06 08:00:01 np0005548790.localdomain podman[30952]: 2025-12-06 08:00:01.404099088 +0000 UTC m=+0.085070984 container remove ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_noyce, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z)
Dec 06 08:00:01 np0005548790.localdomain systemd[1]: libpod-conmon-ef78546c63e1cbb81ce9bf7d0f7eb041f228842c41b4ea38011610be2b9b9c27.scope: Deactivated successfully.
Dec 06 08:00:01 np0005548790.localdomain sudo[28948]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:01 np0005548790.localdomain sudo[30968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:01 np0005548790.localdomain sudo[30968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:01 np0005548790.localdomain sudo[30968]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:01 np0005548790.localdomain sudo[30983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- lvm list --format json
Dec 06 08:00:01 np0005548790.localdomain sudo[30983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 2025-12-06 08:00:02.137095597 +0000 UTC m=+0.059454045 container create 087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, release=1763362218, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git)
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: Started libpod-conmon-087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f.scope.
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 2025-12-06 08:00:02.200835583 +0000 UTC m=+0.123194031 container init 087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 2025-12-06 08:00:02.108951655 +0000 UTC m=+0.031310153 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 2025-12-06 08:00:02.209637493 +0000 UTC m=+0.131995941 container start 087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7)
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 2025-12-06 08:00:02.209908951 +0000 UTC m=+0.132267399 container attach 087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7)
Dec 06 08:00:02 np0005548790.localdomain zealous_carver[31051]: 167 167
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: libpod-087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f.scope: Deactivated successfully.
Dec 06 08:00:02 np0005548790.localdomain podman[31036]: 2025-12-06 08:00:02.213603116 +0000 UTC m=+0.135961584 container died 087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, version=7, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 08:00:02 np0005548790.localdomain podman[31056]: 2025-12-06 08:00:02.299123542 +0000 UTC m=+0.074703609 container remove 087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7)
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: libpod-conmon-087aa6199f456808e78415f8e731054d3dcc5df3105859b8bfe11841a904ad4f.scope: Deactivated successfully.
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f7f0dd69dbf8db7f06d448c7adbe014650ca34772095045ba26f6b89ebb64c0c-merged.mount: Deactivated successfully.
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 2025-12-06 08:00:02.50971973 +0000 UTC m=+0.069808179 container create 5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hofstadter, ceph=True, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main)
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: Started libpod-conmon-5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e.scope.
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 2025-12-06 08:00:02.482474064 +0000 UTC m=+0.042562513 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:02 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad6550e53f2ab45a4a5620485872888e3950ca7f2a24abb4d1d8ead863e3585/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:02 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad6550e53f2ab45a4a5620485872888e3950ca7f2a24abb4d1d8ead863e3585/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:02 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad6550e53f2ab45a4a5620485872888e3950ca7f2a24abb4d1d8ead863e3585/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 2025-12-06 08:00:02.61325299 +0000 UTC m=+0.173341429 container init 5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hofstadter, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: tmp-crun.fTN5RG.mount: Deactivated successfully.
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 2025-12-06 08:00:02.628002469 +0000 UTC m=+0.188090918 container start 5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hofstadter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 2025-12-06 08:00:02.628578797 +0000 UTC m=+0.188667246 container attach 5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hofstadter, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]: {
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:     "0": [
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:         {
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "devices": [
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "/dev/loop3"
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             ],
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_name": "ceph_lv0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_size": "7511998464",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=LGVG2M-osAj-cfmv-iHZY-DA2P-myZ7-0rCRl3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1939e851-b10c-5c3b-9bb7-8e7f380233e8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f0a16764-58fb-4e66-a34c-f3b971e966ce,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_uuid": "LGVG2M-osAj-cfmv-iHZY-DA2P-myZ7-0rCRl3",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "name": "ceph_lv0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "tags": {
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.block_uuid": "LGVG2M-osAj-cfmv-iHZY-DA2P-myZ7-0rCRl3",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.cluster_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.cluster_name": "ceph",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.crush_device_class": "",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.encrypted": "0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.osd_fsid": "f0a16764-58fb-4e66-a34c-f3b971e966ce",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.osd_id": "0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.type": "block",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.vdo": "0"
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             },
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "type": "block",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "vg_name": "ceph_vg0"
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:         }
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:     ],
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:     "3": [
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:         {
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "devices": [
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "/dev/loop4"
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             ],
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_name": "ceph_lv1",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_size": "7511998464",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cnuGjO-DlLb-O21w-7Dtx-mGV7-gBdX-rohAKI,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=1939e851-b10c-5c3b-9bb7-8e7f380233e8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=ddd2117f-b4d7-4acb-bc27-d7fd63bcf07f,ceph.osd_id=3,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "lv_uuid": "cnuGjO-DlLb-O21w-7Dtx-mGV7-gBdX-rohAKI",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "name": "ceph_lv1",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "tags": {
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.block_uuid": "cnuGjO-DlLb-O21w-7Dtx-mGV7-gBdX-rohAKI",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.cephx_lockbox_secret": "",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.cluster_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.cluster_name": "ceph",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.crush_device_class": "",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.encrypted": "0",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.osd_fsid": "ddd2117f-b4d7-4acb-bc27-d7fd63bcf07f",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.osd_id": "3",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.type": "block",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:                 "ceph.vdo": "0"
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             },
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "type": "block",
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:             "vg_name": "ceph_vg1"
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:         }
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]:     ]
Dec 06 08:00:02 np0005548790.localdomain magical_hofstadter[31090]: }
Dec 06 08:00:02 np0005548790.localdomain systemd[1]: libpod-5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e.scope: Deactivated successfully.
Dec 06 08:00:02 np0005548790.localdomain podman[31075]: 2025-12-06 08:00:02.99174468 +0000 UTC m=+0.551833129 container died 5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hofstadter, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4)
Dec 06 08:00:03 np0005548790.localdomain podman[31101]: 2025-12-06 08:00:03.073925442 +0000 UTC m=+0.073751453 container remove 5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_hofstadter, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:03 np0005548790.localdomain systemd[1]: libpod-conmon-5185d154563934a009f86fc8f608f4612250cfe834fa9cdf28f0b61f4cb5023e.scope: Deactivated successfully.
Dec 06 08:00:03 np0005548790.localdomain sudo[30983]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:03 np0005548790.localdomain sudo[31115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:03 np0005548790.localdomain sudo[31115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:03 np0005548790.localdomain sudo[31115]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:03 np0005548790.localdomain sudo[31130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 08:00:03 np0005548790.localdomain sudo[31130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8ad6550e53f2ab45a4a5620485872888e3950ca7f2a24abb4d1d8ead863e3585-merged.mount: Deactivated successfully.
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 2025-12-06 08:00:03.84763936 +0000 UTC m=+0.104009553 container create a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-11-26T19:44:28Z)
Dec 06 08:00:03 np0005548790.localdomain systemd[1]: Started libpod-conmon-a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9.scope.
Dec 06 08:00:03 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 2025-12-06 08:00:03.918399755 +0000 UTC m=+0.174769978 container init a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64)
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 2025-12-06 08:00:03.821442383 +0000 UTC m=+0.077812596 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 2025-12-06 08:00:03.92837344 +0000 UTC m=+0.184743653 container start a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main)
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 2025-12-06 08:00:03.928712359 +0000 UTC m=+0.185082622 container attach a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, version=7, name=rhceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, distribution-scope=public)
Dec 06 08:00:03 np0005548790.localdomain goofy_wright[31202]: 167 167
Dec 06 08:00:03 np0005548790.localdomain systemd[1]: libpod-a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9.scope: Deactivated successfully.
Dec 06 08:00:03 np0005548790.localdomain podman[31187]: 2025-12-06 08:00:03.931868539 +0000 UTC m=+0.188238832 container died a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, version=7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 08:00:04 np0005548790.localdomain podman[31207]: 2025-12-06 08:00:04.018415295 +0000 UTC m=+0.078000683 container remove a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_wright, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, name=rhceph, ceph=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: libpod-conmon-a3429c98aedba703270490ff425db86698ec02815f895502b346343c815929e9.scope: Deactivated successfully.
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 2025-12-06 08:00:04.339894601 +0000 UTC m=+0.072150436 container create a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=)
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: Started libpod-conmon-a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a.scope.
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a3e0f729c4249acd2cccfd18a27e5f4563b243d5873d0a9af8a0ea47a1d70dc6-merged.mount: Deactivated successfully.
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 2025-12-06 08:00:04.311597225 +0000 UTC m=+0.043853070 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb87d7ca387ec1c198285a316c4f31a7bf144c0f486c78afb33090b3af80b8d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb87d7ca387ec1c198285a316c4f31a7bf144c0f486c78afb33090b3af80b8d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb87d7ca387ec1c198285a316c4f31a7bf144c0f486c78afb33090b3af80b8d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb87d7ca387ec1c198285a316c4f31a7bf144c0f486c78afb33090b3af80b8d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb87d7ca387ec1c198285a316c4f31a7bf144c0f486c78afb33090b3af80b8d0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 2025-12-06 08:00:04.471333995 +0000 UTC m=+0.203589840 container init a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 2025-12-06 08:00:04.481046782 +0000 UTC m=+0.213302627 container start a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 2025-12-06 08:00:04.481537106 +0000 UTC m=+0.213792941 container attach a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4)
Dec 06 08:00:04 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test[31250]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 06 08:00:04 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test[31250]:                             [--no-systemd] [--no-tmpfs]
Dec 06 08:00:04 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test[31250]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: libpod-a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a.scope: Deactivated successfully.
Dec 06 08:00:04 np0005548790.localdomain podman[31234]: 2025-12-06 08:00:04.695460579 +0000 UTC m=+0.427716474 container died a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, name=rhceph, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cb87d7ca387ec1c198285a316c4f31a7bf144c0f486c78afb33090b3af80b8d0-merged.mount: Deactivated successfully.
Dec 06 08:00:04 np0005548790.localdomain systemd-journald[618]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Dec 06 08:00:04 np0005548790.localdomain systemd-journald[618]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 08:00:04 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:00:04 np0005548790.localdomain podman[31255]: 2025-12-06 08:00:04.79413015 +0000 UTC m=+0.088733679 container remove a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate-test, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhceph, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:04 np0005548790.localdomain systemd[1]: libpod-conmon-a2827d7e7fce25758a7dcf3c775fd7b38d30e1facac732ddf165b837f270c03a.scope: Deactivated successfully.
Dec 06 08:00:04 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:00:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:00:05 np0005548790.localdomain systemd-sysv-generator[31313]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:00:05 np0005548790.localdomain systemd-rc-local-generator[31309]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:00:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:00:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:00:05 np0005548790.localdomain systemd-sysv-generator[31357]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:00:05 np0005548790.localdomain systemd-rc-local-generator[31352]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:00:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:00:05 np0005548790.localdomain systemd[1]: Starting Ceph osd.0 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 08:00:05 np0005548790.localdomain podman[31415]: 
Dec 06 08:00:05 np0005548790.localdomain podman[31415]: 2025-12-06 08:00:05.965875726 +0000 UTC m=+0.072892938 container create c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 06 08:00:06 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:06 np0005548790.localdomain podman[31415]: 2025-12-06 08:00:05.935313365 +0000 UTC m=+0.042330567 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a660b45e6a151d545abb20c8f6b217bfcd7969422977f83afa09ee86627201/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a660b45e6a151d545abb20c8f6b217bfcd7969422977f83afa09ee86627201/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a660b45e6a151d545abb20c8f6b217bfcd7969422977f83afa09ee86627201/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a660b45e6a151d545abb20c8f6b217bfcd7969422977f83afa09ee86627201/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a660b45e6a151d545abb20c8f6b217bfcd7969422977f83afa09ee86627201/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:06 np0005548790.localdomain podman[31415]: 2025-12-06 08:00:06.090976659 +0000 UTC m=+0.197993871 container init c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1763362218, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 08:00:06 np0005548790.localdomain podman[31415]: 2025-12-06 08:00:06.101015765 +0000 UTC m=+0.208032977 container start c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218, architecture=x86_64, RELEASE=main, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph)
Dec 06 08:00:06 np0005548790.localdomain podman[31415]: 2025-12-06 08:00:06.101347924 +0000 UTC m=+0.208365136 container attach c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph)
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 06 08:00:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate[31429]: --> ceph-volume raw activate successful for osd ID: 0
Dec 06 08:00:06 np0005548790.localdomain bash[31415]: --> ceph-volume raw activate successful for osd ID: 0
Dec 06 08:00:06 np0005548790.localdomain systemd[1]: libpod-c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6.scope: Deactivated successfully.
Dec 06 08:00:06 np0005548790.localdomain podman[31415]: 2025-12-06 08:00:06.774693234 +0000 UTC m=+0.881710406 container died c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 08:00:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-74a660b45e6a151d545abb20c8f6b217bfcd7969422977f83afa09ee86627201-merged.mount: Deactivated successfully.
Dec 06 08:00:06 np0005548790.localdomain podman[31549]: 2025-12-06 08:00:06.868140226 +0000 UTC m=+0.082646026 container remove c0f859031143b09ea529abfca0872067df8c94817e667a792729d974345188b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0-activate, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, ceph=True)
Dec 06 08:00:07 np0005548790.localdomain podman[31609]: 
Dec 06 08:00:07 np0005548790.localdomain podman[31609]: 2025-12-06 08:00:07.144913219 +0000 UTC m=+0.042989406 container create 9a4ab317d35ff7274fdb32a3c56157b77311ce1b09311562aadf4d21a7edabbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, io.buildah.version=1.41.4, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 08:00:07 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc3d7fbe97ff16aedf2894a6ab9fb01d6991a5cbaa04a61c4a6e3e7b51330de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:07 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc3d7fbe97ff16aedf2894a6ab9fb01d6991a5cbaa04a61c4a6e3e7b51330de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:07 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc3d7fbe97ff16aedf2894a6ab9fb01d6991a5cbaa04a61c4a6e3e7b51330de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:07 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc3d7fbe97ff16aedf2894a6ab9fb01d6991a5cbaa04a61c4a6e3e7b51330de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:07 np0005548790.localdomain podman[31609]: 2025-12-06 08:00:07.125808385 +0000 UTC m=+0.023884632 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:07 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc3d7fbe97ff16aedf2894a6ab9fb01d6991a5cbaa04a61c4a6e3e7b51330de/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:07 np0005548790.localdomain podman[31609]: 2025-12-06 08:00:07.241354326 +0000 UTC m=+0.139430503 container init 9a4ab317d35ff7274fdb32a3c56157b77311ce1b09311562aadf4d21a7edabbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 08:00:07 np0005548790.localdomain systemd[1]: tmp-crun.1WUHfm.mount: Deactivated successfully.
Dec 06 08:00:07 np0005548790.localdomain podman[31609]: 2025-12-06 08:00:07.251952358 +0000 UTC m=+0.150028545 container start 9a4ab317d35ff7274fdb32a3c56157b77311ce1b09311562aadf4d21a7edabbd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0, version=7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git)
Dec 06 08:00:07 np0005548790.localdomain bash[31609]: 9a4ab317d35ff7274fdb32a3c56157b77311ce1b09311562aadf4d21a7edabbd
Dec 06 08:00:07 np0005548790.localdomain systemd[1]: Started Ceph osd.0 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 08:00:07 np0005548790.localdomain sudo[31130]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: pidfile_write: ignore empty --pid-file
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316690e00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316690e00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316690e00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 08:00:07 np0005548790.localdomain sudo[31640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:07 np0005548790.localdomain sudo[31640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:07 np0005548790.localdomain sudo[31640]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:07 np0005548790.localdomain sudo[31655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 08:00:07 np0005548790.localdomain sudo[31655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316690e00 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: load: jerasure load: lrc 
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:07 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 08:00:07 np0005548790.localdomain podman[31720]: 
Dec 06 08:00:08 np0005548790.localdomain podman[31720]: 2025-12-06 08:00:08.006365076 +0000 UTC m=+0.060733920 container create 6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_gates, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: Started libpod-conmon-6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38.scope.
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:08 np0005548790.localdomain podman[31720]: 2025-12-06 08:00:08.073074217 +0000 UTC m=+0.127443011 container init 6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_gates, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=)
Dec 06 08:00:08 np0005548790.localdomain podman[31720]: 2025-12-06 08:00:08.081938779 +0000 UTC m=+0.136307573 container start 6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_gates, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, version=7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=)
Dec 06 08:00:08 np0005548790.localdomain podman[31720]: 2025-12-06 08:00:08.082090663 +0000 UTC m=+0.136459457 container attach 6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_gates, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=)
Dec 06 08:00:08 np0005548790.localdomain jolly_gates[31735]: 167 167
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: libpod-6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38.scope: Deactivated successfully.
Dec 06 08:00:08 np0005548790.localdomain podman[31720]: 2025-12-06 08:00:08.086771217 +0000 UTC m=+0.141140011 container died 6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_gates, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True)
Dec 06 08:00:08 np0005548790.localdomain podman[31720]: 2025-12-06 08:00:07.988429146 +0000 UTC m=+0.042797960 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c53eea9265045afaf46c703a4f5bf1ac14cd5321a172b1d00181df92c4dd1340-merged.mount: Deactivated successfully.
Dec 06 08:00:08 np0005548790.localdomain podman[31740]: 2025-12-06 08:00:08.2803192 +0000 UTC m=+0.184993180 container remove 6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jolly_gates, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: libpod-conmon-6a8b744f9e85c1528b7a2cedf961f7448e80f9ee05e817d63f27b42116516d38.scope: Deactivated successfully.
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691180 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs mount
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs mount shared_bdev_used = 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: RocksDB version: 7.9.2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Git sha 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DB SUMMARY
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DB Session ID:  LGHR94L6QD1L0ES9QDSV
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: CURRENT file:  CURRENT
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.error_if_exists: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.create_if_missing: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                     Options.env: 0x5613174a7ea0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                Options.info_log: 0x561317634be0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                              Options.statistics: (nil)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.use_fsync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                              Options.db_log_dir: 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.write_buffer_manager: 0x56131667b4a0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.unordered_write: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.row_cache: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                              Options.wal_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.two_write_queues: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.wal_compression: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.atomic_flush: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_background_jobs: 4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_background_compactions: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_subcompactions: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.max_open_files: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Compression algorithms supported:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kZSTD supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kXpressCompression supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kBZip2Compression supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kLZ4Compression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kZlibCompression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kSnappyCompression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634da0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316668850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634fc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634fc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317634fc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ad9cd653-cbbf-4e19-9cc1-c2fca3201a16
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008403994, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008404145, "job": 1, "event": "recovery_finished"}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: freelist init
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: freelist _read_cfg
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs umount
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) close
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 2025-12-06 08:00:08.507327086 +0000 UTC m=+0.042231844 container create fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: Started libpod-conmon-fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea.scope.
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:08 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb678841eae7bf0602607e5e5428357a0e7d75128e2dce54aabff73d68246822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 2025-12-06 08:00:08.488431008 +0000 UTC m=+0.023335766 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:08 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb678841eae7bf0602607e5e5428357a0e7d75128e2dce54aabff73d68246822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:08 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb678841eae7bf0602607e5e5428357a0e7d75128e2dce54aabff73d68246822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:08 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb678841eae7bf0602607e5e5428357a0e7d75128e2dce54aabff73d68246822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:08 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb678841eae7bf0602607e5e5428357a0e7d75128e2dce54aabff73d68246822/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 2025-12-06 08:00:08.636810544 +0000 UTC m=+0.171715332 container init fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 2025-12-06 08:00:08.646221813 +0000 UTC m=+0.181126601 container start fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test, vcs-type=git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container)
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 2025-12-06 08:00:08.646420248 +0000 UTC m=+0.181325036 container attach fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bdev(0x561316691500 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs mount
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluefs mount shared_bdev_used = 4718592
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: RocksDB version: 7.9.2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Git sha 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DB SUMMARY
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DB Session ID:  LGHR94L6QD1L0ES9QDSU
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: CURRENT file:  CURRENT
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.error_if_exists: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.create_if_missing: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                     Options.env: 0x561317690310
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                Options.info_log: 0x561317635ce0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                              Options.statistics: (nil)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.use_fsync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                              Options.db_log_dir: 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.write_buffer_manager: 0x56131667b5e0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.unordered_write: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.row_cache: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                              Options.wal_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.two_write_queues: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.wal_compression: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.atomic_flush: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_background_jobs: 4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_background_compactions: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_subcompactions: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.max_open_files: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Compression algorithms supported:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kZSTD supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kXpressCompression supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kBZip2Compression supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kLZ4Compression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kZlibCompression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         kSnappyCompression supported: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: 
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561317643660)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5613166682d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613176432c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316669610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613176432c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316669610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5613176432c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x561316669610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ad9cd653-cbbf-4e19-9cc1-c2fca3201a16
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008668690, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008673819, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008008, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ad9cd653-cbbf-4e19-9cc1-c2fca3201a16", "db_session_id": "LGHR94L6QD1L0ES9QDSU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008677964, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008008, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ad9cd653-cbbf-4e19-9cc1-c2fca3201a16", "db_session_id": "LGHR94L6QD1L0ES9QDSU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008681953, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008008, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ad9cd653-cbbf-4e19-9cc1-c2fca3201a16", "db_session_id": "LGHR94L6QD1L0ES9QDSU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008008685354, "job": 1, "event": "recovery_finished"}
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561316690e00
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: DB pointer 0x5613166c9a00
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: _get_class not permitted to load lua
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: _get_class not permitted to load sdk
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: _get_class not permitted to load test_remote_reads
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 load_pgs
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 load_pgs opened 0 pgs
Dec 06 08:00:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0[31623]: 2025-12-06T08:00:08.720+0000 7f31bb440a80 -1 osd.0 0 log_to_monitors true
Dec 06 08:00:08 np0005548790.localdomain ceph-osd[31627]: osd.0 0 log_to_monitors true
Dec 06 08:00:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test[31983]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 06 08:00:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test[31983]:                             [--no-systemd] [--no-tmpfs]
Dec 06 08:00:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test[31983]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: libpod-fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea.scope: Deactivated successfully.
Dec 06 08:00:08 np0005548790.localdomain podman[31968]: 2025-12-06 08:00:08.880769993 +0000 UTC m=+0.415674801 container died fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, com.redhat.component=rhceph-container, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:08 np0005548790.localdomain podman[32203]: 2025-12-06 08:00:08.95263321 +0000 UTC m=+0.062354047 container remove fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate-test, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Dec 06 08:00:08 np0005548790.localdomain systemd[1]: libpod-conmon-fc9d1cc72fa368eabe65175179011aba6d3ca207366139e9c3638054d0599cea.scope: Deactivated successfully.
Dec 06 08:00:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb678841eae7bf0602607e5e5428357a0e7d75128e2dce54aabff73d68246822-merged.mount: Deactivated successfully.
Dec 06 08:00:09 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:00:09 np0005548790.localdomain systemd-rc-local-generator[32259]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:00:09 np0005548790.localdomain systemd-sysv-generator[32263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:00:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:00:09 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:00:09 np0005548790.localdomain systemd-sysv-generator[32303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:00:09 np0005548790.localdomain systemd-rc-local-generator[32298]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:00:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:00:09 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 06 08:00:09 np0005548790.localdomain systemd[1]: Starting Ceph osd.3 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 08:00:09 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 06 08:00:10 np0005548790.localdomain podman[32363]: 
Dec 06 08:00:10 np0005548790.localdomain podman[32363]: 2025-12-06 08:00:10.052667724 +0000 UTC m=+0.070009065 container create 05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, name=rhceph, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:10 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:10 np0005548790.localdomain podman[32363]: 2025-12-06 08:00:10.024717158 +0000 UTC m=+0.042058499 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c50b82e66f8c2a8d3a21911dbae2b63d24247370eb6c6166fe8a44b54b9895a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c50b82e66f8c2a8d3a21911dbae2b63d24247370eb6c6166fe8a44b54b9895a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c50b82e66f8c2a8d3a21911dbae2b63d24247370eb6c6166fe8a44b54b9895a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c50b82e66f8c2a8d3a21911dbae2b63d24247370eb6c6166fe8a44b54b9895a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c50b82e66f8c2a8d3a21911dbae2b63d24247370eb6c6166fe8a44b54b9895a/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:10 np0005548790.localdomain podman[32363]: 2025-12-06 08:00:10.183957364 +0000 UTC m=+0.201298705 container init 05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate, build-date=2025-11-26T19:44:28Z, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, release=1763362218, io.buildah.version=1.41.4, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public)
Dec 06 08:00:10 np0005548790.localdomain podman[32363]: 2025-12-06 08:00:10.194561255 +0000 UTC m=+0.211902596 container start 05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True)
Dec 06 08:00:10 np0005548790.localdomain podman[32363]: 2025-12-06 08:00:10.196965133 +0000 UTC m=+0.214306514 container attach 05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate, ceph=True, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.openshift.expose-services=, vcs-type=git)
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0 done with init, starting boot process
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0 start_boot
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 06 08:00:10 np0005548790.localdomain ceph-osd[31627]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-3 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Dec 06 08:00:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate[32378]: --> ceph-volume raw activate successful for osd ID: 3
Dec 06 08:00:10 np0005548790.localdomain bash[32363]: --> ceph-volume raw activate successful for osd ID: 3
Dec 06 08:00:10 np0005548790.localdomain systemd[1]: libpod-05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a.scope: Deactivated successfully.
Dec 06 08:00:10 np0005548790.localdomain podman[32509]: 2025-12-06 08:00:10.968758058 +0000 UTC m=+0.052636481 container died 05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Dec 06 08:00:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6c50b82e66f8c2a8d3a21911dbae2b63d24247370eb6c6166fe8a44b54b9895a-merged.mount: Deactivated successfully.
Dec 06 08:00:11 np0005548790.localdomain podman[32509]: 2025-12-06 08:00:11.054492929 +0000 UTC m=+0.138371362 container remove 05b25040f60793bd1ff2e1aa78cc990ca54b96b7614378672d6ffe157780074a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3-activate, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, name=rhceph, architecture=x86_64, release=1763362218, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7)
Dec 06 08:00:11 np0005548790.localdomain podman[32568]: 
Dec 06 08:00:11 np0005548790.localdomain podman[32568]: 2025-12-06 08:00:11.393034162 +0000 UTC m=+0.093052971 container create e1be387a976ac0f984d37121edfef866cb484d28059afa3f43eed7c92730674a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 08:00:11 np0005548790.localdomain systemd[1]: tmp-crun.1zOJfw.mount: Deactivated successfully.
Dec 06 08:00:11 np0005548790.localdomain podman[32568]: 2025-12-06 08:00:11.34415383 +0000 UTC m=+0.044172649 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:11 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c284c57e92d909893547fcd22e26fa2274a4209e216c7f2e83de775cb786e5a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:11 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c284c57e92d909893547fcd22e26fa2274a4209e216c7f2e83de775cb786e5a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:11 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c284c57e92d909893547fcd22e26fa2274a4209e216c7f2e83de775cb786e5a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:11 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c284c57e92d909893547fcd22e26fa2274a4209e216c7f2e83de775cb786e5a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:11 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c284c57e92d909893547fcd22e26fa2274a4209e216c7f2e83de775cb786e5a7/merged/var/lib/ceph/osd/ceph-3 supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:11 np0005548790.localdomain podman[32568]: 2025-12-06 08:00:11.510208 +0000 UTC m=+0.210226809 container init e1be387a976ac0f984d37121edfef866cb484d28059afa3f43eed7c92730674a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3, release=1763362218, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:11 np0005548790.localdomain podman[32568]: 2025-12-06 08:00:11.531840616 +0000 UTC m=+0.231859435 container start e1be387a976ac0f984d37121edfef866cb484d28059afa3f43eed7c92730674a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, distribution-scope=public, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Dec 06 08:00:11 np0005548790.localdomain bash[32568]: e1be387a976ac0f984d37121edfef866cb484d28059afa3f43eed7c92730674a
Dec 06 08:00:11 np0005548790.localdomain systemd[1]: Started Ceph osd.3 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: pidfile_write: ignore empty --pid-file
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) close
Dec 06 08:00:11 np0005548790.localdomain sudo[31655]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:11 np0005548790.localdomain sudo[32599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:11 np0005548790.localdomain sudo[32599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:11 np0005548790.localdomain sudo[32599]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:11 np0005548790.localdomain sudo[32614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- raw list --format json
Dec 06 08:00:11 np0005548790.localdomain sudo[32614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:11 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) close
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: starting osd.3 osd_data /var/lib/ceph/osd/ceph-3 /var/lib/ceph/osd/ceph-3/journal
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: load: jerasure load: lrc 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) close
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 2025-12-06 08:00:12.248558982 +0000 UTC m=+0.052869017 container create 9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_cartwright, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True)
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: Started libpod-conmon-9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b.scope.
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 2025-12-06 08:00:12.220626646 +0000 UTC m=+0.024936681 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 2025-12-06 08:00:12.337229407 +0000 UTC m=+0.141539432 container init 9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_cartwright, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.buildah.version=1.41.4)
Dec 06 08:00:12 np0005548790.localdomain clever_cartwright[32687]: 167 167
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: libpod-9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b.scope: Deactivated successfully.
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 2025-12-06 08:00:12.362301671 +0000 UTC m=+0.166611696 container start 9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_cartwright, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 2025-12-06 08:00:12.362591969 +0000 UTC m=+0.166901994 container attach 9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_cartwright, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 08:00:12 np0005548790.localdomain podman[32671]: 2025-12-06 08:00:12.366610264 +0000 UTC m=+0.170920289 container died 9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_cartwright, ceph=True, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, GIT_CLEAN=True, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) close
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-abe444c6ff178eeff5cf60829d7c2f73dd5a9a06daefaf29babf9453479f3cca-merged.mount: Deactivated successfully.
Dec 06 08:00:12 np0005548790.localdomain podman[32692]: 2025-12-06 08:00:12.512347905 +0000 UTC m=+0.144601200 container remove 9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_cartwright, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, distribution-scope=public, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: libpod-conmon-9a6a4b1c2d780cc06ec278035f81ecd8330ce188ff54dd0707c6d77fd256163b.scope: Deactivated successfully.
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: osd.3:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53998e00 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs mount
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs mount shared_bdev_used = 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: RocksDB version: 7.9.2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Git sha 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: DB SUMMARY
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: DB Session ID:  3JG1F4P23ZMLGEEK6JT3
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: CURRENT file:  CURRENT
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.error_if_exists: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.create_if_missing: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                     Options.env: 0x560b53c2ccb0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                Options.info_log: 0x560b54938a00
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                              Options.statistics: (nil)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.use_fsync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                              Options.db_log_dir: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.write_buffer_manager: 0x560b53982140
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.unordered_write: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.row_cache: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                              Options.wal_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.two_write_queues: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.wal_compression: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.atomic_flush: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_background_jobs: 4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_background_compactions: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_subcompactions: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.max_open_files: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Compression algorithms supported:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kZSTD supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kXpressCompression supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kBZip2Compression supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kLZ4Compression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kZlibCompression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kSnappyCompression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938bc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53970850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938de0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain podman[32715]: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938de0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b54938de0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ef2dbc88-9309-462b-a4f8-633c52937c74
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012697393, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012697638, "job": 1, "event": "recovery_finished"}
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old nid_max 1025
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta old blobid_max 10240
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _open_super_meta min_alloc_size 0x1000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: freelist init
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: freelist _read_cfg
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs umount
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) close
Dec 06 08:00:12 np0005548790.localdomain podman[32715]: 2025-12-06 08:00:12.701992906 +0000 UTC m=+0.071222899 container create 517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_mayer, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7)
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: Started libpod-conmon-517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5.scope.
Dec 06 08:00:12 np0005548790.localdomain podman[32715]: 2025-12-06 08:00:12.674468323 +0000 UTC m=+0.043698336 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:12 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:12 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3386cbd1f5799d22a24dad99a7dd66af77351ef4850bee5cb0965d52b7e1741a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:12 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3386cbd1f5799d22a24dad99a7dd66af77351ef4850bee5cb0965d52b7e1741a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:12 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3386cbd1f5799d22a24dad99a7dd66af77351ef4850bee5cb0965d52b7e1741a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:12 np0005548790.localdomain podman[32715]: 2025-12-06 08:00:12.843558439 +0000 UTC m=+0.212788462 container init 517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_mayer, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True)
Dec 06 08:00:12 np0005548790.localdomain podman[32715]: 2025-12-06 08:00:12.865404342 +0000 UTC m=+0.234634385 container start 517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_mayer, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Dec 06 08:00:12 np0005548790.localdomain podman[32715]: 2025-12-06 08:00:12.86571333 +0000 UTC m=+0.234943413 container attach 517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_mayer, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) open path /var/lib/ceph/osd/ceph-3/block
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-3/block failed: (22) Invalid argument
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bdev(0x560b53999180 /var/lib/ceph/osd/ceph-3/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-3/block size 7.0 GiB
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs mount
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluefs mount shared_bdev_used = 4718592
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: RocksDB version: 7.9.2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Git sha 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: DB SUMMARY
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: DB Session ID:  3JG1F4P23ZMLGEEK6JT2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: CURRENT file:  CURRENT
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.error_if_exists: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.create_if_missing: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                     Options.env: 0x560b53a244d0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                Options.info_log: 0x560b549a95c0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                              Options.statistics: (nil)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.use_fsync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                              Options.db_log_dir: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                                 Options.wal_dir: db.wal
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.write_buffer_manager: 0x560b539835e0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.unordered_write: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.row_cache: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                              Options.wal_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.two_write_queues: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.wal_compression: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.atomic_flush: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_background_jobs: 4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_background_compactions: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_subcompactions: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.max_open_files: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Compression algorithms supported:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kZSTD supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kXpressCompression supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kBZip2Compression supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kLZ4Compression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kZlibCompression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         kSnappyCompression supported: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: 
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8920)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b539702d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8b60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53971610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8b60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53971610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:           Options.merge_operator: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b549a8b60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x560b53971610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.write_buffer_size: 16777216
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.max_write_buffer_number: 64
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.compression: LZ4
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.num_levels: 7
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.bloom_locality: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                               Options.ttl: 2592000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                       Options.enable_blob_files: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                           Options.min_blob_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ef2dbc88-9309-462b-a4f8-633c52937c74
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012949405, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012973717, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008012, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef2dbc88-9309-462b-a4f8-633c52937c74", "db_session_id": "3JG1F4P23ZMLGEEK6JT2", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:12 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008012996274, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008012, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef2dbc88-9309-462b-a4f8-633c52937c74", "db_session_id": "3JG1F4P23ZMLGEEK6JT2", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008013000242, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765008012, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ef2dbc88-9309-462b-a4f8-633c52937c74", "db_session_id": "3JG1F4P23ZMLGEEK6JT2", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765008013026655, "job": 1, "event": "recovery_finished"}
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560b539dc700
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: DB pointer 0x560b54893a00
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super from 4, latest 4
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _upgrade_super done
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: _get_class not permitted to load lua
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: _get_class not permitted to load sdk
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: _get_class not permitted to load test_remote_reads
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 load_pgs
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 load_pgs opened 0 pgs
Dec 06 08:00:13 np0005548790.localdomain ceph-osd[32586]: osd.3 0 log_to_monitors true
Dec 06 08:00:13 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3[32582]: 2025-12-06T08:00:13.097+0000 7fe6278f5a80 -1 osd.3 0 log_to_monitors true
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]: {
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:     "ddd2117f-b4d7-4acb-bc27-d7fd63bcf07f": {
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "ceph_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8",
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "osd_id": 3,
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "osd_uuid": "ddd2117f-b4d7-4acb-bc27-d7fd63bcf07f",
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "type": "bluestore"
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:     },
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:     "f0a16764-58fb-4e66-a34c-f3b971e966ce": {
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "ceph_fsid": "1939e851-b10c-5c3b-9bb7-8e7f380233e8",
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "osd_id": 0,
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "osd_uuid": "f0a16764-58fb-4e66-a34c-f3b971e966ce",
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:         "type": "bluestore"
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]:     }
Dec 06 08:00:13 np0005548790.localdomain youthful_mayer[32925]: }
Dec 06 08:00:13 np0005548790.localdomain systemd[1]: libpod-517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5.scope: Deactivated successfully.
Dec 06 08:00:13 np0005548790.localdomain podman[32715]: 2025-12-06 08:00:13.332403533 +0000 UTC m=+0.701633596 container died 517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_mayer, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=1763362218, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=)
Dec 06 08:00:13 np0005548790.localdomain systemd[1]: tmp-crun.G1hKAF.mount: Deactivated successfully.
Dec 06 08:00:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3386cbd1f5799d22a24dad99a7dd66af77351ef4850bee5cb0965d52b7e1741a-merged.mount: Deactivated successfully.
Dec 06 08:00:13 np0005548790.localdomain podman[33176]: 2025-12-06 08:00:13.463214319 +0000 UTC m=+0.121145391 container remove 517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_mayer, release=1763362218, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:00:13 np0005548790.localdomain systemd[1]: libpod-conmon-517a879b152cf9a9b76a6860158db6e71c1fa8ecca8abcd8c27934f352e8d1a5.scope: Deactivated successfully.
Dec 06 08:00:13 np0005548790.localdomain sudo[32614]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.599 iops: 5785.299 elapsed_sec: 0.519
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [WRN] : OSD bench result of 5785.299193 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 0 waiting for initial osdmap
Dec 06 08:00:14 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0[31623]: 2025-12-06T08:00:14.462+0000 7f31b7bd4640 -1 osd.0 0 waiting for initial osdmap
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 check_osdmap_features require_osd_release unknown -> reef
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 set_numa_affinity not setting numa affinity
Dec 06 08:00:14 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-0[31623]: 2025-12-06T08:00:14.479+0000 7f31b29e9640 -1 osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 11 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0 done with init, starting boot process
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0 start_boot
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[32586]: osd.3 0  bench count 12288000 bsize 4 KiB
Dec 06 08:00:14 np0005548790.localdomain ceph-osd[31627]: osd.0 12 state: booting -> active
Dec 06 08:00:15 np0005548790.localdomain sudo[33189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:00:15 np0005548790.localdomain sudo[33189]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:15 np0005548790.localdomain sudo[33189]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:15 np0005548790.localdomain sudo[33204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:15 np0005548790.localdomain sudo[33204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:15 np0005548790.localdomain sudo[33204]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:15 np0005548790.localdomain sudo[33219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:00:15 np0005548790.localdomain sudo[33219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:16 np0005548790.localdomain podman[33301]: 2025-12-06 08:00:16.210591096 +0000 UTC m=+0.084403965 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main)
Dec 06 08:00:16 np0005548790.localdomain podman[33301]: 2025-12-06 08:00:16.344380657 +0000 UTC m=+0.218193596 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, architecture=x86_64, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 08:00:16 np0005548790.localdomain sudo[33219]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:16 np0005548790.localdomain sudo[33369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:16 np0005548790.localdomain sudo[33369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:16 np0005548790.localdomain sudo[33369]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:16 np0005548790.localdomain sudo[33384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:00:16 np0005548790.localdomain sudo[33384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:16 np0005548790.localdomain ceph-osd[31627]: osd.0 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 06 08:00:16 np0005548790.localdomain ceph-osd[31627]: osd.0 14 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 06 08:00:16 np0005548790.localdomain ceph-osd[31627]: osd.0 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 06 08:00:17 np0005548790.localdomain sudo[33384]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:17 np0005548790.localdomain sudo[33430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:17 np0005548790.localdomain sudo[33430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:17 np0005548790.localdomain sudo[33430]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:17 np0005548790.localdomain sudo[33445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 08:00:17 np0005548790.localdomain sudo[33445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 2025-12-06 08:00:18.207613469 +0000 UTC m=+0.061729159 container create 66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_mahavira, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, RELEASE=main, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True)
Dec 06 08:00:18 np0005548790.localdomain systemd[1]: Started libpod-conmon-66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a.scope.
Dec 06 08:00:18 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 2025-12-06 08:00:18.171689856 +0000 UTC m=+0.025805536 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 2025-12-06 08:00:18.292003293 +0000 UTC m=+0.146118933 container init 66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_mahavira, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, name=rhceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.expose-services=)
Dec 06 08:00:18 np0005548790.localdomain dreamy_mahavira[33517]: 167 167
Dec 06 08:00:18 np0005548790.localdomain systemd[1]: libpod-66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a.scope: Deactivated successfully.
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 2025-12-06 08:00:18.309913923 +0000 UTC m=+0.164029593 container start 66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_mahavira, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, GIT_CLEAN=True, architecture=x86_64, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4)
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 2025-12-06 08:00:18.310214501 +0000 UTC m=+0.164330181 container attach 66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_mahavira, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=)
Dec 06 08:00:18 np0005548790.localdomain podman[33502]: 2025-12-06 08:00:18.312011463 +0000 UTC m=+0.166127163 container died 66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_mahavira, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 06 08:00:18 np0005548790.localdomain podman[33522]: 2025-12-06 08:00:18.403349084 +0000 UTC m=+0.088437940 container remove 66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_mahavira, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, version=7, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 08:00:18 np0005548790.localdomain systemd[1]: libpod-conmon-66c88c6782a09481731a612c98b1a5db8d3e68f4d7cd16b613aa8e56590e2c2a.scope: Deactivated successfully.
Dec 06 08:00:18 np0005548790.localdomain podman[33544]: 
Dec 06 08:00:18 np0005548790.localdomain podman[33544]: 2025-12-06 08:00:18.58364265 +0000 UTC m=+0.068579744 container create 2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hypatia, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7)
Dec 06 08:00:18 np0005548790.localdomain systemd[1]: Started libpod-conmon-2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1.scope.
Dec 06 08:00:18 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:00:18 np0005548790.localdomain podman[33544]: 2025-12-06 08:00:18.551061951 +0000 UTC m=+0.035999115 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:00:18 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ff5ee9f8f9949aa952fe247e48ac0f954b592c3fcf633ee6ad00e73479fe355/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:18 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ff5ee9f8f9949aa952fe247e48ac0f954b592c3fcf633ee6ad00e73479fe355/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:18 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ff5ee9f8f9949aa952fe247e48ac0f954b592c3fcf633ee6ad00e73479fe355/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:00:18 np0005548790.localdomain podman[33544]: 2025-12-06 08:00:18.676603107 +0000 UTC m=+0.161540181 container init 2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hypatia, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, release=1763362218, version=7, io.openshift.tags=rhceph ceph)
Dec 06 08:00:18 np0005548790.localdomain podman[33544]: 2025-12-06 08:00:18.690506543 +0000 UTC m=+0.175443637 container start 2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hypatia, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=7, release=1763362218, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 08:00:18 np0005548790.localdomain podman[33544]: 2025-12-06 08:00:18.690770061 +0000 UTC m=+0.175707165 container attach 2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hypatia, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, GIT_BRANCH=main, release=1763362218, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4)
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.712 iops: 4278.259 elapsed_sec: 0.701
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [WRN] : OSD bench result of 4278.258668 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.3. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 0 waiting for initial osdmap
Dec 06 08:00:18 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3[32582]: 2025-12-06T08:00:18.716+0000 7fe623874640 -1 osd.3 0 waiting for initial osdmap
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 check_osdmap_features require_osd_release unknown -> reef
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 set_numa_affinity not setting numa affinity
Dec 06 08:00:18 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-osd-3[32582]: 2025-12-06T08:00:18.735+0000 7fe61ee9e640 -1 osd.3 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 15 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 16 state: booting -> active
Dec 06 08:00:18 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16) [3,4,2] r=0 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:00:19 np0005548790.localdomain systemd[1]: tmp-crun.vZBjzq.mount: Deactivated successfully.
Dec 06 08:00:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a9c15ea97bb47fd5342dc50566958af6e411f1e6fd5458d4597c8d539286f5c2-merged.mount: Deactivated successfully.
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]: [
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:     {
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "available": false,
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "ceph_device": false,
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "lsm_data": {},
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "lvs": [],
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "path": "/dev/sr0",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "rejected_reasons": [
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "Has a FileSystem",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "Insufficient space (<5GB)"
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         ],
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         "sys_api": {
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "actuators": null,
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "device_nodes": "sr0",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "human_readable_size": "482.00 KB",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "id_bus": "ata",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "model": "QEMU DVD-ROM",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "nr_requests": "2",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "partitions": {},
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "path": "/dev/sr0",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "removable": "1",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "rev": "2.5+",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "ro": "0",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "rotational": "1",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "sas_address": "",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "sas_device_handle": "",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "scheduler_mode": "mq-deadline",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "sectors": 0,
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "sectorsize": "2048",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "size": 493568.0,
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "support_discard": "0",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "type": "disk",
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:             "vendor": "QEMU"
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:         }
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]:     }
Dec 06 08:00:19 np0005548790.localdomain distracted_hypatia[33559]: ]
Dec 06 08:00:19 np0005548790.localdomain systemd[1]: libpod-2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1.scope: Deactivated successfully.
Dec 06 08:00:19 np0005548790.localdomain podman[33544]: 2025-12-06 08:00:19.498117447 +0000 UTC m=+0.983054621 container died 2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hypatia, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, release=1763362218, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=)
Dec 06 08:00:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9ff5ee9f8f9949aa952fe247e48ac0f954b592c3fcf633ee6ad00e73479fe355-merged.mount: Deactivated successfully.
Dec 06 08:00:19 np0005548790.localdomain podman[35034]: 2025-12-06 08:00:19.594711689 +0000 UTC m=+0.086270039 container remove 2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_hypatia, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, name=rhceph)
Dec 06 08:00:19 np0005548790.localdomain systemd[1]: libpod-conmon-2c6d025db0922892197f7e7158cb43b1c1ceb7d7108886a651bded1507f58fe1.scope: Deactivated successfully.
Dec 06 08:00:19 np0005548790.localdomain sudo[33445]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:19 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 17 pg[1.0( empty local-lis/les=16/17 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=16) [3,4,2] r=0 lpr=16 pi=[14,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:00:21 np0005548790.localdomain sshd[35049]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:00:22 np0005548790.localdomain sudo[35051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:00:22 np0005548790.localdomain sudo[35051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:22 np0005548790.localdomain sudo[35051]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:24 np0005548790.localdomain sshd[35049]: Received disconnect from 45.78.219.217 port 37432:11: Bye Bye [preauth]
Dec 06 08:00:24 np0005548790.localdomain sshd[35049]: Disconnected from authenticating user root 45.78.219.217 port 37432 [preauth]
Dec 06 08:00:27 np0005548790.localdomain sudo[35066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:00:27 np0005548790.localdomain sudo[35066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:27 np0005548790.localdomain sudo[35066]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:28 np0005548790.localdomain sudo[35081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:00:28 np0005548790.localdomain sudo[35081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:28 np0005548790.localdomain systemd[1]: tmp-crun.dvAHvm.mount: Deactivated successfully.
Dec 06 08:00:28 np0005548790.localdomain podman[35165]: 2025-12-06 08:00:28.785574539 +0000 UTC m=+0.093185425 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_BRANCH=main, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=)
Dec 06 08:00:28 np0005548790.localdomain podman[35165]: 2025-12-06 08:00:28.940239125 +0000 UTC m=+0.247850041 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, name=rhceph, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.41.4)
Dec 06 08:00:29 np0005548790.localdomain sudo[35081]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:29 np0005548790.localdomain sudo[35227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:00:29 np0005548790.localdomain sudo[35227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:00:29 np0005548790.localdomain sudo[35227]: pam_unix(sudo:session): session closed for user root
Dec 06 08:00:45 np0005548790.localdomain systemd[25986]: Starting Mark boot as successful...
Dec 06 08:00:45 np0005548790.localdomain systemd[25986]: Finished Mark boot as successful.
Dec 06 08:00:53 np0005548790.localdomain sshd[35243]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:00:53 np0005548790.localdomain sshd[35243]: Invalid user sol from 193.32.162.146 port 56860
Dec 06 08:00:53 np0005548790.localdomain sshd[35243]: Connection closed by invalid user sol 193.32.162.146 port 56860 [preauth]
Dec 06 08:01:01 np0005548790.localdomain CROND[35246]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 08:01:01 np0005548790.localdomain run-parts[35249]: (/etc/cron.hourly) starting 0anacron
Dec 06 08:01:01 np0005548790.localdomain run-parts[35255]: (/etc/cron.hourly) finished 0anacron
Dec 06 08:01:01 np0005548790.localdomain CROND[35245]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 08:01:29 np0005548790.localdomain sudo[35256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:29 np0005548790.localdomain sudo[35256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:01:29 np0005548790.localdomain sudo[35256]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:29 np0005548790.localdomain sudo[35271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:01:29 np0005548790.localdomain sudo[35271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:01:30 np0005548790.localdomain systemd[1]: tmp-crun.wyjq57.mount: Deactivated successfully.
Dec 06 08:01:30 np0005548790.localdomain podman[35353]: 2025-12-06 08:01:30.591818191 +0000 UTC m=+0.089756171 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 08:01:30 np0005548790.localdomain podman[35353]: 2025-12-06 08:01:30.713361802 +0000 UTC m=+0.211299762 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, name=rhceph, io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=1763362218, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=)
Dec 06 08:01:30 np0005548790.localdomain sudo[35271]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:31 np0005548790.localdomain sudo[35419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:01:31 np0005548790.localdomain sudo[35419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:01:31 np0005548790.localdomain sudo[35419]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:31 np0005548790.localdomain sudo[35434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:01:31 np0005548790.localdomain sudo[35434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:01:31 np0005548790.localdomain sudo[35434]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:32 np0005548790.localdomain sudo[35480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:01:32 np0005548790.localdomain sudo[35480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:01:32 np0005548790.localdomain sudo[35480]: pam_unix(sudo:session): session closed for user root
Dec 06 08:01:34 np0005548790.localdomain sshd[24487]: Received disconnect from 192.168.122.100 port 52180:11: disconnected by user
Dec 06 08:01:34 np0005548790.localdomain sshd[24487]: Disconnected from user zuul 192.168.122.100 port 52180
Dec 06 08:01:34 np0005548790.localdomain sshd[24484]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:01:34 np0005548790.localdomain systemd[1]: session-13.scope: Deactivated successfully.
Dec 06 08:01:34 np0005548790.localdomain systemd[1]: session-13.scope: Consumed 20.880s CPU time.
Dec 06 08:01:34 np0005548790.localdomain systemd-logind[760]: Session 13 logged out. Waiting for processes to exit.
Dec 06 08:01:34 np0005548790.localdomain systemd-logind[760]: Removed session 13.
Dec 06 08:02:32 np0005548790.localdomain sudo[35495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:02:32 np0005548790.localdomain sudo[35495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:02:32 np0005548790.localdomain sudo[35495]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:32 np0005548790.localdomain sudo[35510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:02:32 np0005548790.localdomain sudo[35510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:02:33 np0005548790.localdomain sudo[35510]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:34 np0005548790.localdomain sudo[35557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:02:34 np0005548790.localdomain sudo[35557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:02:34 np0005548790.localdomain sudo[35557]: pam_unix(sudo:session): session closed for user root
Dec 06 08:02:58 np0005548790.localdomain sshd[35572]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:03:01 np0005548790.localdomain anacron[6186]: Job `cron.weekly' started
Dec 06 08:03:01 np0005548790.localdomain anacron[6186]: Job `cron.weekly' terminated
Dec 06 08:03:34 np0005548790.localdomain sudo[35580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:03:34 np0005548790.localdomain sudo[35580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:03:34 np0005548790.localdomain sudo[35580]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:34 np0005548790.localdomain sudo[35595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:03:34 np0005548790.localdomain sudo[35595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:03:34 np0005548790.localdomain sudo[35595]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:35 np0005548790.localdomain sudo[35642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:03:35 np0005548790.localdomain sudo[35642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:03:35 np0005548790.localdomain sudo[35642]: pam_unix(sudo:session): session closed for user root
Dec 06 08:03:45 np0005548790.localdomain systemd[25986]: Created slice User Background Tasks Slice.
Dec 06 08:03:45 np0005548790.localdomain systemd[25986]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 08:03:45 np0005548790.localdomain systemd[25986]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 08:04:11 np0005548790.localdomain sshd[35658]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:04:12 np0005548790.localdomain sshd[35658]: Invalid user sol from 193.32.162.146 port 41046
Dec 06 08:04:12 np0005548790.localdomain sshd[35658]: Connection closed by invalid user sol 193.32.162.146 port 41046 [preauth]
Dec 06 08:04:35 np0005548790.localdomain sudo[35660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:04:35 np0005548790.localdomain sudo[35660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:04:35 np0005548790.localdomain sudo[35660]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:35 np0005548790.localdomain sudo[35675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:04:35 np0005548790.localdomain sudo[35675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:04:36 np0005548790.localdomain sudo[35675]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:37 np0005548790.localdomain sudo[35721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:04:37 np0005548790.localdomain sudo[35721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:04:37 np0005548790.localdomain sudo[35721]: pam_unix(sudo:session): session closed for user root
Dec 06 08:04:58 np0005548790.localdomain sshd[35572]: fatal: Timeout before authentication for 45.78.219.217 port 58284
Dec 06 08:05:14 np0005548790.localdomain sshd[35737]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:05:14 np0005548790.localdomain sshd[35737]: Accepted publickey for zuul from 192.168.122.100 port 43496 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 08:05:14 np0005548790.localdomain systemd-logind[760]: New session 27 of user zuul.
Dec 06 08:05:14 np0005548790.localdomain systemd[1]: Started Session 27 of User zuul.
Dec 06 08:05:14 np0005548790.localdomain sshd[35737]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 08:05:14 np0005548790.localdomain sudo[35783]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlqysilejmmbgjnxjxavbxiimvruudlk ; /usr/bin/python3
Dec 06 08:05:14 np0005548790.localdomain sudo[35783]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:14 np0005548790.localdomain python3[35785]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 08:05:14 np0005548790.localdomain sudo[35783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:15 np0005548790.localdomain sudo[35828]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfthnwyeywqnmibjbmheidcaaitqeqqf ; /usr/bin/python3
Dec 06 08:05:15 np0005548790.localdomain sudo[35828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:15 np0005548790.localdomain python3[35830]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 08:05:15 np0005548790.localdomain sudo[35828]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:15 np0005548790.localdomain sudo[35848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jakulfwlfiptdqhlimtvcqgnwnszpkcs ; /usr/bin/python3
Dec 06 08:05:15 np0005548790.localdomain sudo[35848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:15 np0005548790.localdomain python3[35850]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548790.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 06 08:05:15 np0005548790.localdomain useradd[35852]: new group: name=tripleo-admin, GID=1003
Dec 06 08:05:15 np0005548790.localdomain useradd[35852]: new user: name=tripleo-admin, UID=1003, GID=1003, home=/home/tripleo-admin, shell=/bin/bash, from=none
Dec 06 08:05:16 np0005548790.localdomain sudo[35848]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:16 np0005548790.localdomain sudo[35904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjlpwkcxjvywqlowogtjmwltyrmvxpdk ; /usr/bin/python3
Dec 06 08:05:16 np0005548790.localdomain sudo[35904]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:16 np0005548790.localdomain python3[35906]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:05:16 np0005548790.localdomain sudo[35904]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:16 np0005548790.localdomain sudo[35947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqtqqjtzbaarcctmpfhjkgcegeyejvwz ; /usr/bin/python3
Dec 06 08:05:16 np0005548790.localdomain sudo[35947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:16 np0005548790.localdomain python3[35949]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765008316.3030314-66341-173092758807143/source _original_basename=tmp93atiqas follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:16 np0005548790.localdomain sudo[35947]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 np0005548790.localdomain sudo[35977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxpfsejsdenbkfjgjcspycbiqdeagefh ; /usr/bin/python3
Dec 06 08:05:17 np0005548790.localdomain sudo[35977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:17 np0005548790.localdomain python3[35979]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:17 np0005548790.localdomain sudo[35977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:17 np0005548790.localdomain sudo[35993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbavhdjpsuirzmvjuyftceygbjwizkyk ; /usr/bin/python3
Dec 06 08:05:17 np0005548790.localdomain sudo[35993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:17 np0005548790.localdomain python3[35995]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:17 np0005548790.localdomain sudo[35993]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 np0005548790.localdomain sudo[36009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpoaplgadglhnyjegxsfgayjvmpctuzj ; /usr/bin/python3
Dec 06 08:05:18 np0005548790.localdomain sudo[36009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:18 np0005548790.localdomain python3[36011]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:18 np0005548790.localdomain sudo[36009]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:18 np0005548790.localdomain sudo[36025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-amfkbizvxpqghgtzmbmsuveigfaxvwgf ; /usr/bin/python3
Dec 06 08:05:18 np0005548790.localdomain sudo[36025]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:05:18 np0005548790.localdomain python3[36027]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVgIoETU+ZMXzSQYJdf7tKLhQsLaB9easlDHbhHsBFXd1+Axjoyg338dVOvCx68r/a15lecdlSwbLqd4GXxUOdHnWLa1I9u6bd6azOwE0Dd6ZjnquN3BRq9dLJXMlKHhXMddL6WHNfxT/JOL+gKp0CM74naUBGqrzV05qlb19n7xZJtmxVohAGGeQdFwQJBVoQ6yZOjcJZ5CpbWCs4pFXZT/31fA0KIAJkrzAeUGRRkQEnzXY1riF0RHwvXaNJ0ZoAYfT7q263Pd5gnQEmpiBirUBH6CXJn4lIQyNMyVRbnKWemW9P1kyv2bjZUPg2b1xWBE7MBTs/wMt1RjdO9p+sxtwOd2IQMf1t3JLa2p3xqgxtGTMugpJUBr1TWwdLoHl+eAMuWZwAWofLWICHUlPzyTN8L8acu0im2eR60FEl9XdUjp8DYCBGxhhIVx+xZxj6nTnNc5T7GJpJlCTF+9YPlDVrLg8y/YXly0BoOqr7p+RaqMAJnoZymNDbuu9V3Vs= zuul-build-sshkey regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:18 np0005548790.localdomain sudo[36025]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:19 np0005548790.localdomain python3[36041]: ansible-ping Invoked with data=pong
Dec 06 08:05:30 np0005548790.localdomain sshd[36042]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:05:30 np0005548790.localdomain sshd[36042]: Accepted publickey for tripleo-admin from 192.168.122.100 port 55068 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 08:05:30 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 06 08:05:30 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 06 08:05:30 np0005548790.localdomain systemd-logind[760]: New session 28 of user tripleo-admin.
Dec 06 08:05:30 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 06 08:05:30 np0005548790.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Queued start job for default target Main User Target.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Created slice User Application Slice.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Reached target Paths.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Reached target Timers.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Starting D-Bus User Message Bus Socket...
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Starting Create User's Volatile Files and Directories...
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Finished Create User's Volatile Files and Directories.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Listening on D-Bus User Message Bus Socket.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Reached target Sockets.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Reached target Basic System.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Reached target Main User Target.
Dec 06 08:05:30 np0005548790.localdomain systemd[36046]: Startup finished in 94ms.
Dec 06 08:05:30 np0005548790.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 06 08:05:30 np0005548790.localdomain systemd[1]: Started Session 28 of User tripleo-admin.
Dec 06 08:05:30 np0005548790.localdomain sshd[36042]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 06 08:05:31 np0005548790.localdomain sudo[36105]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdqcufvgnfdjicmuaxbmqprgksgerfub ; /usr/bin/python3
Dec 06 08:05:31 np0005548790.localdomain sudo[36105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:31 np0005548790.localdomain python3[36107]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 08:05:31 np0005548790.localdomain sudo[36105]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:36 np0005548790.localdomain sudo[36125]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iycfmdwuzyaylxdxykdkttyajzqfbzqt ; /usr/bin/python3
Dec 06 08:05:36 np0005548790.localdomain sudo[36125]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:36 np0005548790.localdomain python3[36127]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Dec 06 08:05:36 np0005548790.localdomain sudo[36125]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:37 np0005548790.localdomain sshd[36128]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:05:37 np0005548790.localdomain sudo[36129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:05:37 np0005548790.localdomain sudo[36129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:05:37 np0005548790.localdomain sudo[36129]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:37 np0005548790.localdomain sudo[36158]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-riywmltxganubgqmkdplummtzvhucvnv ; /usr/bin/python3
Dec 06 08:05:37 np0005548790.localdomain sudo[36158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:37 np0005548790.localdomain sudo[36159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:05:37 np0005548790.localdomain sudo[36159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:05:37 np0005548790.localdomain python3[36174]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 06 08:05:37 np0005548790.localdomain sudo[36158]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:37 np0005548790.localdomain sudo[36240]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-medogtauohgwrqdqmfitylhluvwwljmx ; /usr/bin/python3
Dec 06 08:05:37 np0005548790.localdomain sudo[36240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:37 np0005548790.localdomain sudo[36159]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:37 np0005548790.localdomain python3[36242]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.a4tvd60etmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:37 np0005548790.localdomain sudo[36240]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:38 np0005548790.localdomain sudo[36282]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvaooifdjkedjxhcnlotqdmifhdzznwp ; /usr/bin/python3
Dec 06 08:05:38 np0005548790.localdomain sudo[36282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:38 np0005548790.localdomain python3[36284]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.a4tvd60etmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:38 np0005548790.localdomain sudo[36282]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:38 np0005548790.localdomain sudo[36285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:05:38 np0005548790.localdomain sudo[36285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:05:38 np0005548790.localdomain sudo[36285]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:39 np0005548790.localdomain sudo[36313]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xslphcyrradxszdwjqzzwimxxkejmvyw ; /usr/bin/python3
Dec 06 08:05:39 np0005548790.localdomain sudo[36313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:39 np0005548790.localdomain sshd[36128]: Received disconnect from 45.78.219.217 port 42020:11: Bye Bye [preauth]
Dec 06 08:05:39 np0005548790.localdomain sshd[36128]: Disconnected from authenticating user root 45.78.219.217 port 42020 [preauth]
Dec 06 08:05:39 np0005548790.localdomain python3[36315]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.a4tvd60etmphosts insertbefore=BOF block=172.17.0.106 np0005548788.localdomain np0005548788
                                                         172.18.0.106 np0005548788.storage.localdomain np0005548788.storage
                                                         172.20.0.106 np0005548788.storagemgmt.localdomain np0005548788.storagemgmt
                                                         172.17.0.106 np0005548788.internalapi.localdomain np0005548788.internalapi
                                                         172.19.0.106 np0005548788.tenant.localdomain np0005548788.tenant
                                                         192.168.122.106 np0005548788.ctlplane.localdomain np0005548788.ctlplane
                                                         172.17.0.107 np0005548789.localdomain np0005548789
                                                         172.18.0.107 np0005548789.storage.localdomain np0005548789.storage
                                                         172.20.0.107 np0005548789.storagemgmt.localdomain np0005548789.storagemgmt
                                                         172.17.0.107 np0005548789.internalapi.localdomain np0005548789.internalapi
                                                         172.19.0.107 np0005548789.tenant.localdomain np0005548789.tenant
                                                         192.168.122.107 np0005548789.ctlplane.localdomain np0005548789.ctlplane
                                                         172.17.0.108 np0005548790.localdomain np0005548790
                                                         172.18.0.108 np0005548790.storage.localdomain np0005548790.storage
                                                         172.20.0.108 np0005548790.storagemgmt.localdomain np0005548790.storagemgmt
                                                         172.17.0.108 np0005548790.internalapi.localdomain np0005548790.internalapi
                                                         172.19.0.108 np0005548790.tenant.localdomain np0005548790.tenant
                                                         192.168.122.108 np0005548790.ctlplane.localdomain np0005548790.ctlplane
                                                         172.17.0.103 np0005548785.localdomain np0005548785
                                                         172.18.0.103 np0005548785.storage.localdomain np0005548785.storage
                                                         172.20.0.103 np0005548785.storagemgmt.localdomain np0005548785.storagemgmt
                                                         172.17.0.103 np0005548785.internalapi.localdomain np0005548785.internalapi
                                                         172.19.0.103 np0005548785.tenant.localdomain np0005548785.tenant
                                                         192.168.122.103 np0005548785.ctlplane.localdomain np0005548785.ctlplane
                                                         172.17.0.104 np0005548786.localdomain np0005548786
                                                         172.18.0.104 np0005548786.storage.localdomain np0005548786.storage
                                                         172.20.0.104 np0005548786.storagemgmt.localdomain np0005548786.storagemgmt
                                                         172.17.0.104 np0005548786.internalapi.localdomain np0005548786.internalapi
                                                         172.19.0.104 np0005548786.tenant.localdomain np0005548786.tenant
                                                         192.168.122.104 np0005548786.ctlplane.localdomain np0005548786.ctlplane
                                                         172.17.0.105 np0005548787.localdomain np0005548787
                                                         172.18.0.105 np0005548787.storage.localdomain np0005548787.storage
                                                         172.20.0.105 np0005548787.storagemgmt.localdomain np0005548787.storagemgmt
                                                         172.17.0.105 np0005548787.internalapi.localdomain np0005548787.internalapi
                                                         172.19.0.105 np0005548787.tenant.localdomain np0005548787.tenant
                                                         192.168.122.105 np0005548787.ctlplane.localdomain np0005548787.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                         192.168.122.99  overcloud.ctlplane.localdomain
                                                         172.18.0.250  overcloud.storage.localdomain
                                                         172.20.0.140  overcloud.storagemgmt.localdomain
                                                         172.17.0.168  overcloud.internalapi.localdomain
                                                         172.21.0.196  overcloud.localdomain
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:39 np0005548790.localdomain sudo[36313]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:39 np0005548790.localdomain sudo[36329]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihddtojsnulvrqqbamoedjwykeaqemcz ; /usr/bin/python3
Dec 06 08:05:39 np0005548790.localdomain sudo[36329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:39 np0005548790.localdomain python3[36331]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.a4tvd60etmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:05:39 np0005548790.localdomain sudo[36329]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:40 np0005548790.localdomain sudo[36346]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfihzfibfwejtmhckurgzckxmcdrfkfa ; /usr/bin/python3
Dec 06 08:05:40 np0005548790.localdomain sudo[36346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:40 np0005548790.localdomain python3[36348]: ansible-file Invoked with path=/tmp/ansible.a4tvd60etmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:05:40 np0005548790.localdomain sudo[36346]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:41 np0005548790.localdomain sudo[36362]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxbgwquijufjurqtxzhctaweiojcklnj ; /usr/bin/python3
Dec 06 08:05:41 np0005548790.localdomain sudo[36362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:41 np0005548790.localdomain python3[36364]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:05:41 np0005548790.localdomain sudo[36362]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:41 np0005548790.localdomain sudo[36379]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euvsdmnluvhnzrfjugyqcceqpejyqaeo ; /usr/bin/python3
Dec 06 08:05:41 np0005548790.localdomain sudo[36379]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:42 np0005548790.localdomain python3[36381]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:05:45 np0005548790.localdomain sudo[36379]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:47 np0005548790.localdomain sudo[36398]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urwupfzkcyuqobfsprexjjemmkebqdfy ; /usr/bin/python3
Dec 06 08:05:47 np0005548790.localdomain sudo[36398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:47 np0005548790.localdomain python3[36400]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:05:47 np0005548790.localdomain sudo[36398]: pam_unix(sudo:session): session closed for user root
Dec 06 08:05:48 np0005548790.localdomain sudo[36415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxtfwbasjxhekcjsaymxkkbdvtwbtjrr ; /usr/bin/python3
Dec 06 08:05:48 np0005548790.localdomain sudo[36415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:05:48 np0005548790.localdomain python3[36417]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:06:04 np0005548790.localdomain groupadd[36589]: group added to /etc/group: name=puppet, GID=52
Dec 06 08:06:04 np0005548790.localdomain groupadd[36589]: group added to /etc/gshadow: name=puppet
Dec 06 08:06:04 np0005548790.localdomain groupadd[36589]: new group: name=puppet, GID=52
Dec 06 08:06:04 np0005548790.localdomain useradd[36596]: new user: name=puppet, UID=52, GID=52, home=/var/lib/puppet, shell=/sbin/nologin, from=none
Dec 06 08:06:38 np0005548790.localdomain sudo[37067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:06:38 np0005548790.localdomain sudo[37067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:06:38 np0005548790.localdomain sudo[37067]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:38 np0005548790.localdomain sudo[37082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:06:38 np0005548790.localdomain sudo[37082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:06:39 np0005548790.localdomain sudo[37082]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:39 np0005548790.localdomain sudo[37135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:06:39 np0005548790.localdomain sudo[37135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:06:39 np0005548790.localdomain sudo[37135]: pam_unix(sudo:session): session closed for user root
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  Converting 2699 SID table entries...
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 08:06:58 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 08:06:58 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 06 08:06:58 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:06:58 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 08:06:58 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:06:58 np0005548790.localdomain systemd-rc-local-generator[37307]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:06:58 np0005548790.localdomain systemd-sysv-generator[37311]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:06:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:06:59 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 08:06:59 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 08:06:59 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 08:06:59 np0005548790.localdomain systemd[1]: run-r4ea78a106bf8421dabcf35cf68e916b2.service: Deactivated successfully.
Dec 06 08:07:00 np0005548790.localdomain sudo[36415]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:01 np0005548790.localdomain sudo[37751]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmohicfezurnieilepaqaredtjxsrnfp ; /usr/bin/python3
Dec 06 08:07:01 np0005548790.localdomain sudo[37751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:01 np0005548790.localdomain python3[37753]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:02 np0005548790.localdomain sudo[37751]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:03 np0005548790.localdomain sudo[37890]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhtheopuqcypoverzftgybnuofzdhrgl ; /usr/bin/python3
Dec 06 08:07:03 np0005548790.localdomain sudo[37890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:03 np0005548790.localdomain python3[37892]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:07:03 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:07:03 np0005548790.localdomain systemd-rc-local-generator[37919]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:07:03 np0005548790.localdomain systemd-sysv-generator[37922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:07:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:07:04 np0005548790.localdomain sudo[37890]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:05 np0005548790.localdomain sudo[37944]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhtbyptgsesnmgaixkubcqxqecgrkrmm ; /usr/bin/python3
Dec 06 08:07:05 np0005548790.localdomain sudo[37944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:05 np0005548790.localdomain python3[37946]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:05 np0005548790.localdomain sudo[37944]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:05 np0005548790.localdomain sudo[37960]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqipfaevjtczdbezdqtbnmpijnzddfez ; /usr/bin/python3
Dec 06 08:07:05 np0005548790.localdomain sudo[37960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:05 np0005548790.localdomain python3[37962]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:05 np0005548790.localdomain sudo[37960]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:06 np0005548790.localdomain sudo[37977]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-luneandplitskuokkaqferbjjqvtsuwk ; /usr/bin/python3
Dec 06 08:07:06 np0005548790.localdomain sudo[37977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:06 np0005548790.localdomain python3[37979]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 08:07:06 np0005548790.localdomain sudo[37977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:06 np0005548790.localdomain sudo[37995]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgzekedjsabvwazibiebqvrmdtooqwca ; /usr/bin/python3
Dec 06 08:07:06 np0005548790.localdomain sudo[37995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:07 np0005548790.localdomain python3[37997]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:07 np0005548790.localdomain sudo[37995]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:07 np0005548790.localdomain sudo[38013]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqsepcqkycbxbzodsklymrsyfwzenopj ; /usr/bin/python3
Dec 06 08:07:07 np0005548790.localdomain sudo[38013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:07 np0005548790.localdomain python3[38015]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:07 np0005548790.localdomain sudo[38013]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:07 np0005548790.localdomain sudo[38031]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nltnohlvjbadezybtchqbeemvxkhjgol ; /usr/bin/python3
Dec 06 08:07:07 np0005548790.localdomain sudo[38031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:08 np0005548790.localdomain python3[38033]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 08:07:08 np0005548790.localdomain systemd[1]: Reloading Network Manager...
Dec 06 08:07:08 np0005548790.localdomain NetworkManager[5968]: <info>  [1765008428.1439] audit: op="reload" arg="0" pid=38036 uid=0 result="success"
Dec 06 08:07:08 np0005548790.localdomain NetworkManager[5968]: <info>  [1765008428.1450] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 06 08:07:08 np0005548790.localdomain NetworkManager[5968]: <info>  [1765008428.1451] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 06 08:07:08 np0005548790.localdomain systemd[1]: Reloaded Network Manager.
Dec 06 08:07:08 np0005548790.localdomain sudo[38031]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:08 np0005548790.localdomain sudo[38050]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-goxtgatytqiwbwjibqhcudsszoljiuic ; /usr/bin/python3
Dec 06 08:07:08 np0005548790.localdomain sudo[38050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:08 np0005548790.localdomain python3[38052]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:08 np0005548790.localdomain sudo[38050]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:08 np0005548790.localdomain sudo[38067]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-obenkiclwfvinggvvskcswtogvapvuip ; /usr/bin/python3
Dec 06 08:07:08 np0005548790.localdomain sudo[38067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:09 np0005548790.localdomain python3[38069]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:07:09 np0005548790.localdomain sudo[38067]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:09 np0005548790.localdomain sudo[38085]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsrvefblbzzzdpejzqwzoktyqkbnrqsi ; /usr/bin/python3
Dec 06 08:07:09 np0005548790.localdomain sudo[38085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:09 np0005548790.localdomain python3[38087]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:07:09 np0005548790.localdomain sudo[38085]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:09 np0005548790.localdomain sudo[38101]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwazlcxntsbusttfgxuymoiiwwpagoap ; /usr/bin/python3
Dec 06 08:07:09 np0005548790.localdomain sudo[38101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:09 np0005548790.localdomain python3[38103]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:09 np0005548790.localdomain sudo[38101]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:10 np0005548790.localdomain sudo[38117]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nidkbyheeetgnbaomhglciztzgeakuha ; /usr/bin/python3
Dec 06 08:07:10 np0005548790.localdomain sudo[38117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:10 np0005548790.localdomain python3[38119]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 08:07:10 np0005548790.localdomain sudo[38117]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:10 np0005548790.localdomain sudo[38133]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prpuetswbaowaaadnaonsmuwijhpqmfi ; /usr/bin/python3
Dec 06 08:07:10 np0005548790.localdomain sudo[38133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:11 np0005548790.localdomain python3[38135]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:07:11 np0005548790.localdomain sudo[38133]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:11 np0005548790.localdomain sudo[38149]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enogpovtopvknqqnaoutevypzmfomzet ; /usr/bin/python3
Dec 06 08:07:11 np0005548790.localdomain sudo[38149]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:11 np0005548790.localdomain python3[38151]: ansible-blockinfile Invoked with path=/tmp/ansible.o85r91yg block=[192.168.122.106]*,[np0005548788.ctlplane.localdomain]*,[172.17.0.106]*,[np0005548788.internalapi.localdomain]*,[172.18.0.106]*,[np0005548788.storage.localdomain]*,[172.20.0.106]*,[np0005548788.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005548788.tenant.localdomain]*,[np0005548788.localdomain]*,[np0005548788]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxIoAQH9YZnGrAxYR5prFQwo6HY5mwdDjndb+bp2pwvtVLM4ABIdCi+K1wpbhOpoO7BsYOf/tdBqemvSDleNo/ZLh3v3MmoVtoTtQZqLWsAQWFgJCjcGUGB+H3CHhtbp706coVQMlGD+UQqpCBy8WamMB/Ldy+hSHbLHwzuMzj8tO90vUbEyuKgOuu/X3ZFa+Yjo/asQ+PTrVfirh1QvRQ9aK22xH89KbThA/1an4OjnNGLCP752auSQ894B21QLKfqaMGPlpbjU8Wr6MP4zKV9lUzpQiFr6IU6cd4CeIsJDj7FnAZuBSmi8ewgm/r4ZWkmCSlqw8OpMC5soJnm8Q4PJTIFvT9eyyFCh9xmQkMhzE8P332LtYjZ+vXhYFU14e04mOQx5UrtHN8uWJVbOAwtLNAcenHyRtCQGkAZ6f9q0OvSuYr+o3FhHhN5ABu32AKAD8YpkjLypi+PbaiKNQW8XzPAHHbV8CGZ4B09ZWeQY49VA0bPxIYBXd1mEBlXSE=
                                                         [192.168.122.107]*,[np0005548789.ctlplane.localdomain]*,[172.17.0.107]*,[np0005548789.internalapi.localdomain]*,[172.18.0.107]*,[np0005548789.storage.localdomain]*,[172.20.0.107]*,[np0005548789.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005548789.tenant.localdomain]*,[np0005548789.localdomain]*,[np0005548789]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwH3rhRTvOINLmLdbeRXeXOiMzz+IXEuW2cXYAe50Wcc3ikH2RVGirWQrwLc8hAoA7UFCXADqEMxPg6/fLsQkbP7kLOpUtam8nuXvgt8VHM4RFl5wh9EOgZ7DWgjA7s3r2eQMcBhv82CjVMLY/YjnLuRNXCsJAqeG32qcKedKH/huEFvkb49U/UnNlxi5BfNrMlY9n5UQXE2rd6EKwP58aP/qQ1ie3p8nwHc36/MJcfEIABlLaoHK/LxnadOFTh93OkqVi7A0VQsKSmKD64nABiN7ML0NReoyRIQI5r3Dawe8v2K9jCBh5jY88TVsYUJqgwoZSSU73sYGHX4uF+PY8wL7qwn6mCzA17GGYeB8Dy0N8qwDqah6kUjpcLwGp7YaKf0FIZPBKcLVMrX6Tnwxer1j3kOIt3tgLZoz3mMfstWfCyvt9t+GEW5MCE+MBkY4Eree3uK7pI+wJ3vFQS9XVP00hjNiLWYmoaaW6rl8xtw7QtGhzmjcWbOxaZvHWE5E=
                                                         [192.168.122.108]*,[np0005548790.ctlplane.localdomain]*,[172.17.0.108]*,[np0005548790.internalapi.localdomain]*,[172.18.0.108]*,[np0005548790.storage.localdomain]*,[172.20.0.108]*,[np0005548790.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005548790.tenant.localdomain]*,[np0005548790.localdomain]*,[np0005548790]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmdMCy44p73Ui+o09YQitqR9FILqoJ6AGYYutFVH6wn5m1j6oEoI4XgVFPR3UpG3SXdoiG7m0DRxC/WZZMpZbaQ3ZHbJJioRh1hV5uQtK5k2gtmS8uePng5UprbLncMXf+HIxNRvirU3r6zdgNGAroK0rN0nWESi/FNb2flu9Aw9JAsgIAAouW4IUoeyMGZ1AflhRhsWsQMstM9UEeGU+iTqV7al1URVCSq1finY99m+QC+Pftpd2C/+agboOIiVa63+D/RqqfYqh4C/PYfDbssYjcZzk3P90+HQ6uMKexX3HRnFbyje4eLSBHC0pjr/4pNfk/eSpdHeyMAPsP+QlBztdcPj9OnjcmT9ymeJRKF7GwNIWg3Pn9L2yY50d8l9Zu6rNIDW786XNcbm88yHdCHA5FE1A8XTWQRQ3eUSUsmsvf03pExAouRM4Fj8dvCu6wzG2SuyWqmdT5yCNrUG0e1CeE6PcfTLBeS5CJAwn5HM8aUndQQldWmaUbMPL5Jis=
                                                         [192.168.122.103]*,[np0005548785.ctlplane.localdomain]*,[172.17.0.103]*,[np0005548785.internalapi.localdomain]*,[172.18.0.103]*,[np0005548785.storage.localdomain]*,[172.20.0.103]*,[np0005548785.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005548785.tenant.localdomain]*,[np0005548785.localdomain]*,[np0005548785]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC89JzJHuRLDUgmU66VPdPVwYLrvslBwa5i2QfiUzrnpt1lKz8ayq6QMRy5y5GgfjQQhX/YZiAjUSoogVsYDkoDaImXdtfQHFlFMLTlJPiYcA/cGAwMAE/vifpWoztBHUXkJ5YWUojkXzGoR8d7ESx/tTLG/9QrQDsW6JcV18mcFCQZdeWYWGWdLn6ynmQOZ0N4U6mYK1FqE+GKgP6L9PEjkC1ePo81AnYcdQ5Z1IETdcCcJytdvvxH/Zie1PiAaMAgMYhsqu7+DZRRTvg+cEMw3mRVuodIyQEbpZs8MjR3itViRfZ+UqYi6uKDnz1viLL0aACaYhOLzrE7bQ6Sl4j1MnMrWncUOv3Sq2fus+Y6oYmed84E6HUNljte7vVP9jwPclbCAmj5WuC/Av9dSqqHEpPRbKJ4tAuBrO2LBKS7J62FjRYiY807V1viyxUgjK5FmsQyfVr3/YOirluSx54e4XwxxDrAjtrd0x68H7/Mt6HP/79cWKaVbC7XUckYRmE=
                                                         [192.168.122.104]*,[np0005548786.ctlplane.localdomain]*,[172.17.0.104]*,[np0005548786.internalapi.localdomain]*,[172.18.0.104]*,[np0005548786.storage.localdomain]*,[172.20.0.104]*,[np0005548786.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005548786.tenant.localdomain]*,[np0005548786.localdomain]*,[np0005548786]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDURzBA/aIGrwPgaIApy0UCTi4wdQhfDEx0QfkSAIn0ZptZcOkaR8BWtl9GijRPEp++Ep4qU04JcwHO1ZULd2UnCdDeg1Imwnf7x9HQBjAr0mH+tE0t4MBLtBbrk8Ep5ggyKATK1CvEl3NuGIS4gSSUWxzkR74Iju/GtrEMuVnMSsOw+auBofiv1ne4zyXqQWZORiK32DSolw1KyXGLyqG+JOpl3Kza5o79S1KUghfRzskZMm/AxFYciPmg4EQK/jL9Izj7qq3v8MaL8baeyqNlPaaRKCh+pkZlYtoPzDhe+vn/jwnDmQgqC1Bh+dkNiKEVlWz3mxoiMoeLY3jP/tMF2M4M8puGakPc2sqJxk1++Tv/lFRO3zBS+V2kECKI5DtQI6XThfLYXxIQl5SHr4yGEoxhMNt6YNQPLp6lg30kHO24YyNNA7LPFYYoOGUCaq5ZVUCF9lagMxcgkN0Bs+ZZqeni+53RqxoutiRZ0m9pIiqxGjrJjbNFXmofgfDBcUE=
                                                         [192.168.122.105]*,[np0005548787.ctlplane.localdomain]*,[172.17.0.105]*,[np0005548787.internalapi.localdomain]*,[172.18.0.105]*,[np0005548787.storage.localdomain]*,[172.20.0.105]*,[np0005548787.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005548787.tenant.localdomain]*,[np0005548787.localdomain]*,[np0005548787]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXe0UZ2kJKcvYaHSnjIOf3QqkGhArLo32nvDm8Pl8ZVNWfdRV8R+e17etAicDq//fxWC+U9jiHp4qI6/0Jm64rPocmJKaA+r79sNpv+598NlGtVUfTYQ34Ze9bgaPkjAwKfPNrzjSDChyfkys4Hm0J7ttog5rvMcuRelxkFmoonOcuzBC+9ufI6qld7br5w4WDookwamkefbMCiwAZxrw2bSjoTu7/TEFbt7SM0lUIdqP5WvxpWK52OkjnakQ0BL4QHdRYz1kBx/vS0TFxXb2pMO291dfkxDl3H2oXXZZYK/LWy3nZyJEX+mD5J6WOEs5HC5GQQ+CNEV0wa2e/gJA7KBsyL5T6RBtH8id22sBHZkzcaDhUz1ZABGAiOx4rdrr4YFFFy/u00nX3ZCuRBPXYh37Pafl7GXcSKyhTmkCZI0591RdNmb1duh9ZIObRmPVp2+WIheAFvS7EU4B0+ZjAEbDJgiSa9VlUrlRFX0ajcFHR8FnwNRcoERO3A3h4/Tc=
                                                          create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:11 np0005548790.localdomain sudo[38149]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:12 np0005548790.localdomain sudo[38165]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aocgsvdvymvpteblfataaqffptuwjfuj ; /usr/bin/python3
Dec 06 08:07:12 np0005548790.localdomain sudo[38165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:12 np0005548790.localdomain python3[38167]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.o85r91yg' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:12 np0005548790.localdomain sudo[38165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:12 np0005548790.localdomain sudo[38183]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utfumpdfvarsqkhkraaynjcnsajnxycu ; /usr/bin/python3
Dec 06 08:07:12 np0005548790.localdomain sudo[38183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:12 np0005548790.localdomain python3[38185]: ansible-file Invoked with path=/tmp/ansible.o85r91yg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:12 np0005548790.localdomain sudo[38183]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:13 np0005548790.localdomain sudo[38199]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvfkjidshamzvpycuymbjfdwjmhxdtjd ; /usr/bin/python3
Dec 06 08:07:13 np0005548790.localdomain sudo[38199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:13 np0005548790.localdomain python3[38201]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:07:13 np0005548790.localdomain sudo[38199]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:13 np0005548790.localdomain sudo[38215]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jferwgvjlqlshawtyikzikreztjmxipx ; /usr/bin/python3
Dec 06 08:07:13 np0005548790.localdomain sudo[38215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:13 np0005548790.localdomain python3[38217]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:14 np0005548790.localdomain sudo[38215]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:14 np0005548790.localdomain sudo[38233]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzkxjtkwcnlxxinmybahaythcqpqolkz ; /usr/bin/python3
Dec 06 08:07:14 np0005548790.localdomain sudo[38233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:14 np0005548790.localdomain python3[38235]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:14 np0005548790.localdomain sudo[38233]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:14 np0005548790.localdomain sudo[38252]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cblpqnsbzcemifougceuwoufkjtcjtij ; /usr/bin/python3
Dec 06 08:07:14 np0005548790.localdomain sudo[38252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:14 np0005548790.localdomain python3[38254]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Dec 06 08:07:14 np0005548790.localdomain sudo[38252]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:14 np0005548790.localdomain sudo[38268]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgpaekodqqdcsteipgmableblzcycksi ; /usr/bin/python3
Dec 06 08:07:14 np0005548790.localdomain sudo[38268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:15 np0005548790.localdomain sudo[38268]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:15 np0005548790.localdomain sudo[38316]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxgjulruujvimqcichaymteqfvalsyaw ; /usr/bin/python3
Dec 06 08:07:15 np0005548790.localdomain sudo[38316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:15 np0005548790.localdomain sudo[38316]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:15 np0005548790.localdomain sudo[38359]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cekibdkurpanathwqfhcoalxmdnreknx ; /usr/bin/python3
Dec 06 08:07:15 np0005548790.localdomain sudo[38359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:15 np0005548790.localdomain sudo[38359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:17 np0005548790.localdomain sudo[38389]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-denqgdaeofbcdphhwqonqibfxlhfjiez ; /usr/bin/python3
Dec 06 08:07:17 np0005548790.localdomain sudo[38389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:17 np0005548790.localdomain python3[38391]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:17 np0005548790.localdomain sudo[38389]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:17 np0005548790.localdomain sudo[38406]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofwancfgtrlrzkabltsfuowxpfgeegsh ; /usr/bin/python3
Dec 06 08:07:17 np0005548790.localdomain sudo[38406]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:17 np0005548790.localdomain python3[38408]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:07:20 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 08:07:20 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 08:07:20 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:07:20 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:07:21 np0005548790.localdomain systemd-rc-local-generator[38490]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:07:21 np0005548790.localdomain systemd-sysv-generator[38496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: tuned.service: Consumed 1.835s CPU time.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 08:07:21 np0005548790.localdomain systemd[1]: run-rc96e69b1a54640d5b2b04e30f8982e23.service: Deactivated successfully.
Dec 06 08:07:22 np0005548790.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 08:07:22 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:07:22 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 08:07:23 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 08:07:23 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 08:07:23 np0005548790.localdomain systemd[1]: run-r035b31b9b3f24acf88d892a233fd53a3.service: Deactivated successfully.
Dec 06 08:07:23 np0005548790.localdomain sudo[38406]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:23 np0005548790.localdomain sudo[38843]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gagpjixuzmanicayiraquifwzmnecvwh ; /usr/bin/python3
Dec 06 08:07:23 np0005548790.localdomain sudo[38843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:24 np0005548790.localdomain python3[38845]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:07:24 np0005548790.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 08:07:24 np0005548790.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 06 08:07:24 np0005548790.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 08:07:24 np0005548790.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 08:07:25 np0005548790.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 08:07:25 np0005548790.localdomain sudo[38843]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:25 np0005548790.localdomain sudo[39038]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ehopgljpphdtyodapajqgcmswsefamyq ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 06 08:07:25 np0005548790.localdomain sudo[39038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:25 np0005548790.localdomain python3[39040]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:25 np0005548790.localdomain sudo[39038]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:26 np0005548790.localdomain sudo[39055]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcbgyqsghxujnwopuvytctxzblnvmljd ; /usr/bin/python3
Dec 06 08:07:26 np0005548790.localdomain sudo[39055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:26 np0005548790.localdomain python3[39057]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 06 08:07:26 np0005548790.localdomain sudo[39055]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:26 np0005548790.localdomain sudo[39071]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxtlmvtfgujofqqgrnmnlqsiywxzegwe ; /usr/bin/python3
Dec 06 08:07:26 np0005548790.localdomain sudo[39071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:26 np0005548790.localdomain python3[39073]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:07:27 np0005548790.localdomain sudo[39071]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:27 np0005548790.localdomain sudo[39087]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whtetvcxwgfvsheztbywvtasojggtfyj ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 06 08:07:27 np0005548790.localdomain sudo[39087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:27 np0005548790.localdomain python3[39089]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:27 np0005548790.localdomain sshd[39091]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:07:28 np0005548790.localdomain sshd[39091]: Invalid user solana from 193.32.162.146 port 53464
Dec 06 08:07:28 np0005548790.localdomain sshd[39091]: Connection closed by invalid user solana 193.32.162.146 port 53464 [preauth]
Dec 06 08:07:28 np0005548790.localdomain sudo[39087]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:29 np0005548790.localdomain sudo[39109]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqcpeqgkeflgejybxsqlhzgzirdkwwdw ; /usr/bin/python3
Dec 06 08:07:29 np0005548790.localdomain sudo[39109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:29 np0005548790.localdomain python3[39111]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:29 np0005548790.localdomain sudo[39109]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:29 np0005548790.localdomain sudo[39126]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfcdvvvxervljwhrxkigtrbuywapjxad ; /usr/bin/python3
Dec 06 08:07:29 np0005548790.localdomain sudo[39126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:29 np0005548790.localdomain python3[39128]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:07:29 np0005548790.localdomain sudo[39126]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:32 np0005548790.localdomain sudo[39142]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcjdninotgainojfknrpidxcziyszakv ; /usr/bin/python3
Dec 06 08:07:32 np0005548790.localdomain sudo[39142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:32 np0005548790.localdomain python3[39144]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:32 np0005548790.localdomain sudo[39142]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:37 np0005548790.localdomain sudo[39158]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijzpootczqqoxvxtnshvozdugkakambj ; /usr/bin/python3
Dec 06 08:07:37 np0005548790.localdomain sudo[39158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:37 np0005548790.localdomain python3[39160]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:37 np0005548790.localdomain sudo[39158]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:38 np0005548790.localdomain sudo[39206]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apjewgnbiiietwfxyholwsuinnzudfhu ; /usr/bin/python3
Dec 06 08:07:38 np0005548790.localdomain sudo[39206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:38 np0005548790.localdomain python3[39208]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:38 np0005548790.localdomain sudo[39206]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:38 np0005548790.localdomain sudo[39251]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhapbslnvpwfslnmnxoxuagkoqqxbobt ; /usr/bin/python3
Dec 06 08:07:38 np0005548790.localdomain sudo[39251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:38 np0005548790.localdomain python3[39253]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008457.8963106-71092-77380146369730/source _original_basename=tmpg6tm_gli follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:38 np0005548790.localdomain sudo[39251]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:38 np0005548790.localdomain sudo[39281]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozjxrdziirhtjcjycpkzhqhiermdgxat ; /usr/bin/python3
Dec 06 08:07:38 np0005548790.localdomain sudo[39281]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:38 np0005548790.localdomain python3[39283]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:38 np0005548790.localdomain sudo[39281]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:39 np0005548790.localdomain sudo[39329]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erhrekhnfgjxhzkamreliunbzfxufugw ; /usr/bin/python3
Dec 06 08:07:39 np0005548790.localdomain sudo[39329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:39 np0005548790.localdomain python3[39331]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:39 np0005548790.localdomain sudo[39329]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:39 np0005548790.localdomain sudo[39372]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sykvoorkfjtbztmxzsnmefcaxyvfuews ; /usr/bin/python3
Dec 06 08:07:39 np0005548790.localdomain sudo[39372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:39 np0005548790.localdomain python3[39374]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008459.3329623-71183-278232254996027/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=c7cc1670a1e268d7901b4353362279cc1f651214 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:40 np0005548790.localdomain sudo[39372]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:40 np0005548790.localdomain sudo[39375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:07:40 np0005548790.localdomain sudo[39375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:07:40 np0005548790.localdomain sudo[39375]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:40 np0005548790.localdomain sudo[39404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:07:40 np0005548790.localdomain sudo[39404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:07:40 np0005548790.localdomain sudo[39465]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arvwdswxnkfrjsirtfdwmtvowsjacthz ; /usr/bin/python3
Dec 06 08:07:40 np0005548790.localdomain sudo[39465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:40 np0005548790.localdomain python3[39479]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:40 np0005548790.localdomain sudo[39465]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:40 np0005548790.localdomain sudo[39404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:40 np0005548790.localdomain sudo[39539]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdwygewekaxflkdqvtikhknffsbxatzo ; /usr/bin/python3
Dec 06 08:07:40 np0005548790.localdomain sudo[39539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:40 np0005548790.localdomain python3[39541]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008460.2344599-71243-84592849533719/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=8c98a1379d65c02b867387467a21d26fe82a1c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:40 np0005548790.localdomain sudo[39539]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:41 np0005548790.localdomain sudo[39588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:07:41 np0005548790.localdomain sudo[39588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:07:41 np0005548790.localdomain sudo[39588]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:41 np0005548790.localdomain sudo[39616]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgfgjfhivnqfayihljmybsaugzdaptfl ; /usr/bin/python3
Dec 06 08:07:41 np0005548790.localdomain sudo[39616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:41 np0005548790.localdomain python3[39618]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:41 np0005548790.localdomain sudo[39616]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:41 np0005548790.localdomain sudo[39659]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atnjqsxgdbmcgmsevbxuyupheaoxwrkw ; /usr/bin/python3
Dec 06 08:07:41 np0005548790.localdomain sudo[39659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:41 np0005548790.localdomain python3[39661]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008461.120072-71243-33222529384066/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=2906872dac8eb33feea0b6fc0243b65109687e47 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:41 np0005548790.localdomain sudo[39659]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:42 np0005548790.localdomain sudo[39721]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rplspvvxnopfmqvgsiajhbwusgqslssb ; /usr/bin/python3
Dec 06 08:07:42 np0005548790.localdomain sudo[39721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:42 np0005548790.localdomain python3[39723]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:42 np0005548790.localdomain sudo[39721]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:42 np0005548790.localdomain sudo[39764]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-umrcrtyslduyykdzjboozgmkbhnuenqc ; /usr/bin/python3
Dec 06 08:07:42 np0005548790.localdomain sudo[39764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:42 np0005548790.localdomain python3[39766]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008462.109529-71243-251117153585077/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:42 np0005548790.localdomain sudo[39764]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:43 np0005548790.localdomain sudo[39826]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pttncqnniecstxamqkpydzmkmopgtgjo ; /usr/bin/python3
Dec 06 08:07:43 np0005548790.localdomain sudo[39826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:43 np0005548790.localdomain python3[39828]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:43 np0005548790.localdomain sudo[39826]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:43 np0005548790.localdomain sudo[39869]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxewhqotvmuouqwwgxqeirdyluktemka ; /usr/bin/python3
Dec 06 08:07:43 np0005548790.localdomain sudo[39869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:43 np0005548790.localdomain python3[39871]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008462.9884322-71243-190373601261081/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:43 np0005548790.localdomain sudo[39869]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:44 np0005548790.localdomain sudo[39931]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvxdthfoxbaslwuvejtsepzirzckwpnu ; /usr/bin/python3
Dec 06 08:07:44 np0005548790.localdomain sudo[39931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:44 np0005548790.localdomain python3[39933]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:44 np0005548790.localdomain sudo[39931]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:44 np0005548790.localdomain sudo[39974]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymjzwhdonqejohijyabrwcssyblzhell ; /usr/bin/python3
Dec 06 08:07:44 np0005548790.localdomain sudo[39974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:44 np0005548790.localdomain python3[39976]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008463.8393676-71243-106256547749062/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=e3d90f9aa2791a308b592f65a5c9bdb40239aed9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:44 np0005548790.localdomain sudo[39974]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:44 np0005548790.localdomain sudo[40036]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfnlqtdhwdhiikummgokqwudoeexpqqt ; /usr/bin/python3
Dec 06 08:07:44 np0005548790.localdomain sudo[40036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:45 np0005548790.localdomain python3[40038]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:45 np0005548790.localdomain sudo[40036]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:45 np0005548790.localdomain sudo[40079]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oosjstvqigtlfzsdxblzfxyoaequnund ; /usr/bin/python3
Dec 06 08:07:45 np0005548790.localdomain sudo[40079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:45 np0005548790.localdomain python3[40081]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008464.7122622-71243-260632971208438/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:45 np0005548790.localdomain sudo[40079]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:45 np0005548790.localdomain systemd[36046]: Starting Mark boot as successful...
Dec 06 08:07:45 np0005548790.localdomain systemd[36046]: Finished Mark boot as successful.
Dec 06 08:07:45 np0005548790.localdomain sudo[40142]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzahastjzpiwrteydfeoitjcjgxdqiyn ; /usr/bin/python3
Dec 06 08:07:45 np0005548790.localdomain sudo[40142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:45 np0005548790.localdomain python3[40144]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:45 np0005548790.localdomain sudo[40142]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:46 np0005548790.localdomain sudo[40185]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymxibxvggxwttpqipyyfglxdlyemailh ; /usr/bin/python3
Dec 06 08:07:46 np0005548790.localdomain sudo[40185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:46 np0005548790.localdomain python3[40187]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008465.5706983-71243-70475811700843/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=955531133cc86a259eb018c78aadbdeb821782e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:46 np0005548790.localdomain sudo[40185]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:46 np0005548790.localdomain sudo[40247]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbhopunmspoleujuzcetgbynyrushqhb ; /usr/bin/python3
Dec 06 08:07:46 np0005548790.localdomain sudo[40247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:46 np0005548790.localdomain python3[40249]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:46 np0005548790.localdomain sudo[40247]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:46 np0005548790.localdomain sudo[40290]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zyklkdhijxwpfoukyvvynvigaxnmdtmr ; /usr/bin/python3
Dec 06 08:07:46 np0005548790.localdomain sudo[40290]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:47 np0005548790.localdomain python3[40292]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008466.4382637-71243-184517492614181/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:47 np0005548790.localdomain sudo[40290]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:47 np0005548790.localdomain sudo[40352]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etpmvfmwoyxjfpcnkyxesmoejtivarzf ; /usr/bin/python3
Dec 06 08:07:47 np0005548790.localdomain sudo[40352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:47 np0005548790.localdomain python3[40354]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:47 np0005548790.localdomain sudo[40352]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:47 np0005548790.localdomain sudo[40395]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cblywuqttczseywfqtpojzqzfkbetatq ; /usr/bin/python3
Dec 06 08:07:47 np0005548790.localdomain sudo[40395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:47 np0005548790.localdomain python3[40397]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008467.2809458-71243-71850740629113/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:47 np0005548790.localdomain sudo[40395]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:48 np0005548790.localdomain sudo[40457]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axonpwelniejdtetdwcpfsgwnvndfggb ; /usr/bin/python3
Dec 06 08:07:48 np0005548790.localdomain sudo[40457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:48 np0005548790.localdomain python3[40459]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:48 np0005548790.localdomain sudo[40457]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:48 np0005548790.localdomain sudo[40500]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myqhpzsibdspvrtlbznmydxgushxdxmn ; /usr/bin/python3
Dec 06 08:07:48 np0005548790.localdomain sudo[40500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:48 np0005548790.localdomain python3[40502]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008468.1079326-71243-5102989064449/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=ba860f01729c9499eb2a043288c1a12c3481c392 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:48 np0005548790.localdomain sudo[40500]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:49 np0005548790.localdomain sudo[40530]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfjfjmxerfmgxrhciarzolhuljpngtsa ; /usr/bin/python3
Dec 06 08:07:49 np0005548790.localdomain sudo[40530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:49 np0005548790.localdomain python3[40532]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:07:49 np0005548790.localdomain sudo[40530]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:49 np0005548790.localdomain sudo[40578]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lejdkbjdidjydnvnhvzsqdokcxrpexhq ; /usr/bin/python3
Dec 06 08:07:49 np0005548790.localdomain sudo[40578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:50 np0005548790.localdomain python3[40580]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:07:50 np0005548790.localdomain sudo[40578]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:50 np0005548790.localdomain sudo[40621]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jlhdbqglnoivwciqdubiuateooyiywef ; /usr/bin/python3
Dec 06 08:07:50 np0005548790.localdomain sudo[40621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:50 np0005548790.localdomain python3[40623]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008469.7944634-71868-185071258618092/source _original_basename=tmpssgdlq9f follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:07:50 np0005548790.localdomain sudo[40621]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:54 np0005548790.localdomain sudo[40651]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qaqgecilpzrwwlqouqoecrctzhmzpfgz ; /usr/bin/python3
Dec 06 08:07:54 np0005548790.localdomain sudo[40651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:55 np0005548790.localdomain python3[40653]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 08:07:55 np0005548790.localdomain sudo[40651]: pam_unix(sudo:session): session closed for user root
Dec 06 08:07:55 np0005548790.localdomain sudo[40712]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udyrptwivjdnoihgtqxndylsskjpibsc ; /usr/bin/python3
Dec 06 08:07:55 np0005548790.localdomain sudo[40712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:07:55 np0005548790.localdomain python3[40714]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:07:59 np0005548790.localdomain sudo[40712]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:00 np0005548790.localdomain sudo[40729]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixblgrvbzawjzldirqrkxetboecqgdtt ; /usr/bin/python3
Dec 06 08:08:00 np0005548790.localdomain sudo[40729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:00 np0005548790.localdomain python3[40731]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:04 np0005548790.localdomain sudo[40729]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:05 np0005548790.localdomain sudo[40746]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxkunhyrbeuhbkwjpmvcjkscebfqflzt ; /usr/bin/python3
Dec 06 08:08:05 np0005548790.localdomain sudo[40746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:05 np0005548790.localdomain python3[40748]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:05 np0005548790.localdomain sudo[40746]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:05 np0005548790.localdomain sudo[40769]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gcdmanuejboxxiblnjxsmqnfoyjftjwo ; /usr/bin/python3
Dec 06 08:08:05 np0005548790.localdomain sudo[40769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:06 np0005548790.localdomain python3[40771]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:10 np0005548790.localdomain sudo[40769]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:10 np0005548790.localdomain sudo[40786]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzpismrogykrzxdccxooixkwuyjcezyf ; /usr/bin/python3
Dec 06 08:08:10 np0005548790.localdomain sudo[40786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:10 np0005548790.localdomain python3[40788]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:10 np0005548790.localdomain sudo[40786]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:10 np0005548790.localdomain sudo[40809]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qysypnxuqfbvzolwhakjfsdjkbltorwt ; /usr/bin/python3
Dec 06 08:08:10 np0005548790.localdomain sudo[40809]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:11 np0005548790.localdomain python3[40811]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:15 np0005548790.localdomain sudo[40809]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:15 np0005548790.localdomain sudo[40826]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rujzeofgscepdgdwjbchsbjnnvtjdtvr ; /usr/bin/python3
Dec 06 08:08:15 np0005548790.localdomain sudo[40826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:15 np0005548790.localdomain python3[40828]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:19 np0005548790.localdomain sshd[40830]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:08:19 np0005548790.localdomain sudo[40826]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:19 np0005548790.localdomain sudo[40845]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdhdubzjkkurxpinkerwgnfpqbypbqle ; /usr/bin/python3
Dec 06 08:08:19 np0005548790.localdomain sudo[40845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:19 np0005548790.localdomain python3[40847]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:20 np0005548790.localdomain sudo[40845]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:20 np0005548790.localdomain sudo[40868]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxcqyvvqqiedfjewwdaxkxmiuokovegl ; /usr/bin/python3
Dec 06 08:08:20 np0005548790.localdomain sudo[40868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:20 np0005548790.localdomain python3[40870]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:22 np0005548790.localdomain sshd[40830]: Connection closed by 45.78.219.217 port 48148 [preauth]
Dec 06 08:08:24 np0005548790.localdomain sudo[40868]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:24 np0005548790.localdomain sudo[40885]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ioufcumtsgejwtzygjfgozbqeympdgoi ; /usr/bin/python3
Dec 06 08:08:24 np0005548790.localdomain sudo[40885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:24 np0005548790.localdomain python3[40887]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:29 np0005548790.localdomain sudo[40885]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:29 np0005548790.localdomain sudo[40902]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxwoccxuznorpzgqyskjpptywxogrkoy ; /usr/bin/python3
Dec 06 08:08:29 np0005548790.localdomain sudo[40902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:29 np0005548790.localdomain python3[40904]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:29 np0005548790.localdomain sudo[40902]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:29 np0005548790.localdomain sudo[40925]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpqvxzpcavqkggkaqpztqcwuobjntwth ; /usr/bin/python3
Dec 06 08:08:29 np0005548790.localdomain sudo[40925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:29 np0005548790.localdomain python3[40927]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:33 np0005548790.localdomain sudo[40925]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:34 np0005548790.localdomain sudo[40942]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzovcztbmqglhdgyqoigqwfbwisphxwv ; /usr/bin/python3
Dec 06 08:08:34 np0005548790.localdomain sudo[40942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:34 np0005548790.localdomain python3[40944]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:38 np0005548790.localdomain sudo[40942]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:38 np0005548790.localdomain sudo[40959]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ioyjixjllvwvywmntdtlsvvbztrqnysy ; /usr/bin/python3
Dec 06 08:08:38 np0005548790.localdomain sudo[40959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:38 np0005548790.localdomain python3[40961]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:38 np0005548790.localdomain sudo[40959]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:38 np0005548790.localdomain sudo[40982]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynyeggodywfrihuohvtrdsmupeyukpne ; /usr/bin/python3
Dec 06 08:08:38 np0005548790.localdomain sudo[40982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:39 np0005548790.localdomain python3[40984]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:41 np0005548790.localdomain sudo[40986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:08:41 np0005548790.localdomain sudo[40986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:08:41 np0005548790.localdomain sudo[40986]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:41 np0005548790.localdomain sudo[41001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:08:41 np0005548790.localdomain sudo[41001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:08:41 np0005548790.localdomain sudo[41001]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:43 np0005548790.localdomain sudo[40982]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:43 np0005548790.localdomain sudo[41061]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wlqnnyyfjjtaqhhknhngonybvgfujtxo ; /usr/bin/python3
Dec 06 08:08:43 np0005548790.localdomain sudo[41061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:43 np0005548790.localdomain python3[41063]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:08:44 np0005548790.localdomain sudo[41065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:08:44 np0005548790.localdomain sudo[41065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:08:44 np0005548790.localdomain sudo[41065]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:47 np0005548790.localdomain sudo[41061]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:50 np0005548790.localdomain sudo[41093]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jacfcxptepscveqcdvsfcmlcgjfkoizj ; /usr/bin/python3
Dec 06 08:08:50 np0005548790.localdomain sudo[41093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:50 np0005548790.localdomain python3[41095]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:50 np0005548790.localdomain sudo[41093]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:50 np0005548790.localdomain sudo[41141]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xabjohonvgmjmqmnqshabfzyfbfamdyy ; /usr/bin/python3
Dec 06 08:08:50 np0005548790.localdomain sudo[41141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:50 np0005548790.localdomain python3[41143]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:50 np0005548790.localdomain sudo[41141]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:51 np0005548790.localdomain sudo[41159]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzlamttffbzsxfgafcsqugtwvakqxwga ; /usr/bin/python3
Dec 06 08:08:51 np0005548790.localdomain sudo[41159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:51 np0005548790.localdomain python3[41161]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp_mo9dxro recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:51 np0005548790.localdomain sudo[41159]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:51 np0005548790.localdomain sudo[41189]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycsmankloejsaswvixrtgurjzzwssvza ; /usr/bin/python3
Dec 06 08:08:51 np0005548790.localdomain sudo[41189]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:51 np0005548790.localdomain python3[41191]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:51 np0005548790.localdomain sudo[41189]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:52 np0005548790.localdomain sudo[41237]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cpyfqgmgkqyhnsezzavfstaekjwfootl ; /usr/bin/python3
Dec 06 08:08:52 np0005548790.localdomain sudo[41237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:52 np0005548790.localdomain python3[41239]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:52 np0005548790.localdomain sudo[41237]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:52 np0005548790.localdomain sudo[41255]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxjgejmftmmumpkcrzaihnpxdrqqwexw ; /usr/bin/python3
Dec 06 08:08:52 np0005548790.localdomain sudo[41255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:52 np0005548790.localdomain python3[41257]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:52 np0005548790.localdomain sudo[41255]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:52 np0005548790.localdomain sudo[41317]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qudjphlaephjoalhhhxrkujfmqsihmcy ; /usr/bin/python3
Dec 06 08:08:52 np0005548790.localdomain sudo[41317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:53 np0005548790.localdomain python3[41319]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:53 np0005548790.localdomain sudo[41317]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:53 np0005548790.localdomain sudo[41335]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwifszywseqdynbuwxrblgnzlfgkyivj ; /usr/bin/python3
Dec 06 08:08:53 np0005548790.localdomain sudo[41335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:53 np0005548790.localdomain python3[41337]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:53 np0005548790.localdomain sudo[41335]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:53 np0005548790.localdomain sudo[41397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwpjkmjafqgnlgeyakssgkgoinskmpol ; /usr/bin/python3
Dec 06 08:08:53 np0005548790.localdomain sudo[41397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:53 np0005548790.localdomain python3[41399]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:53 np0005548790.localdomain sudo[41397]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:54 np0005548790.localdomain sudo[41415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eggyvdnhmgakxhmslljijqftwhbibcqw ; /usr/bin/python3
Dec 06 08:08:54 np0005548790.localdomain sudo[41415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:54 np0005548790.localdomain python3[41417]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:54 np0005548790.localdomain sudo[41415]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:54 np0005548790.localdomain sudo[41477]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pijicmntptmtavklglcrifxjnoiasudo ; /usr/bin/python3
Dec 06 08:08:54 np0005548790.localdomain sudo[41477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:54 np0005548790.localdomain python3[41479]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:54 np0005548790.localdomain sudo[41477]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:54 np0005548790.localdomain sudo[41495]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygsrcdzyliujemkscyuefiinqfsetkam ; /usr/bin/python3
Dec 06 08:08:54 np0005548790.localdomain sudo[41495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:55 np0005548790.localdomain python3[41497]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:55 np0005548790.localdomain sudo[41495]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:55 np0005548790.localdomain sudo[41557]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbtekskxyfugoszwsyjnuirjjxbvwtfv ; /usr/bin/python3
Dec 06 08:08:55 np0005548790.localdomain sudo[41557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:55 np0005548790.localdomain python3[41559]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:55 np0005548790.localdomain sudo[41557]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:55 np0005548790.localdomain sudo[41575]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-socffeormcirlfojzfihnmeaykyhjkbi ; /usr/bin/python3
Dec 06 08:08:55 np0005548790.localdomain sudo[41575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:55 np0005548790.localdomain python3[41577]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:55 np0005548790.localdomain sudo[41575]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:56 np0005548790.localdomain sudo[41637]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubeatnwykrpknsexbkdjdvpuzbbpkwvj ; /usr/bin/python3
Dec 06 08:08:56 np0005548790.localdomain sudo[41637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:56 np0005548790.localdomain python3[41639]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:56 np0005548790.localdomain sudo[41637]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:56 np0005548790.localdomain sudo[41655]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uczsmjhkifoggfztaoztjhwzmotmrhhq ; /usr/bin/python3
Dec 06 08:08:56 np0005548790.localdomain sudo[41655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:56 np0005548790.localdomain python3[41657]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:56 np0005548790.localdomain sudo[41655]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:56 np0005548790.localdomain sudo[41717]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydgyuxyqvfxexgbysolsumseggoekpvj ; /usr/bin/python3
Dec 06 08:08:56 np0005548790.localdomain sudo[41717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:56 np0005548790.localdomain python3[41719]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:56 np0005548790.localdomain sudo[41717]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:57 np0005548790.localdomain sudo[41735]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqmvjrldyouajsstbverccxovsroslka ; /usr/bin/python3
Dec 06 08:08:57 np0005548790.localdomain sudo[41735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:57 np0005548790.localdomain python3[41737]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:57 np0005548790.localdomain sudo[41735]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:57 np0005548790.localdomain sudo[41797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htkeijqgkvblvyijyzxztuuapbwauqmu ; /usr/bin/python3
Dec 06 08:08:57 np0005548790.localdomain sudo[41797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:57 np0005548790.localdomain python3[41799]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:57 np0005548790.localdomain sudo[41797]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:57 np0005548790.localdomain sudo[41815]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juuqjidoorlvqcyvcilkhvsojimyuggn ; /usr/bin/python3
Dec 06 08:08:57 np0005548790.localdomain sudo[41815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:57 np0005548790.localdomain python3[41817]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:57 np0005548790.localdomain sudo[41815]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:58 np0005548790.localdomain sudo[41877]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fycyftpqbtldbweitssqhuhhkaynumbn ; /usr/bin/python3
Dec 06 08:08:58 np0005548790.localdomain sudo[41877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:58 np0005548790.localdomain python3[41879]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:58 np0005548790.localdomain sudo[41877]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:58 np0005548790.localdomain sudo[41895]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwthpqxmlaffpwmviircqgompxqkhnmw ; /usr/bin/python3
Dec 06 08:08:58 np0005548790.localdomain sudo[41895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:58 np0005548790.localdomain python3[41897]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:58 np0005548790.localdomain sudo[41895]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:58 np0005548790.localdomain sudo[41957]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-reybtmchmzxjidjetdllzsdzquavnfur ; /usr/bin/python3
Dec 06 08:08:58 np0005548790.localdomain sudo[41957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:59 np0005548790.localdomain python3[41959]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:59 np0005548790.localdomain sudo[41957]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:59 np0005548790.localdomain sudo[41975]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eakgbbapklfamvrqvgworzskurtvysdo ; /usr/bin/python3
Dec 06 08:08:59 np0005548790.localdomain sudo[41975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:59 np0005548790.localdomain python3[41977]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:08:59 np0005548790.localdomain sudo[41975]: pam_unix(sudo:session): session closed for user root
Dec 06 08:08:59 np0005548790.localdomain sudo[42037]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xyqacfzuxnzadfgnmoiecnlwullfkltc ; /usr/bin/python3
Dec 06 08:08:59 np0005548790.localdomain sudo[42037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:08:59 np0005548790.localdomain python3[42039]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:08:59 np0005548790.localdomain sudo[42037]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:00 np0005548790.localdomain sudo[42055]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfiovlijcuszyshozrlhqzdagjlcujkh ; /usr/bin/python3
Dec 06 08:09:00 np0005548790.localdomain sudo[42055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:00 np0005548790.localdomain python3[42057]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:00 np0005548790.localdomain sudo[42055]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:00 np0005548790.localdomain sudo[42085]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icjgyjvahswmlcakkjweitcripilnlrb ; /usr/bin/python3
Dec 06 08:09:00 np0005548790.localdomain sudo[42085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:00 np0005548790.localdomain python3[42087]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:09:00 np0005548790.localdomain sudo[42085]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:01 np0005548790.localdomain sudo[42133]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztvchyzcirskdjjyecifyukyfotkybdh ; /usr/bin/python3
Dec 06 08:09:01 np0005548790.localdomain sudo[42133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:01 np0005548790.localdomain python3[42135]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:01 np0005548790.localdomain sudo[42133]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:01 np0005548790.localdomain sudo[42151]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqhruwidayopwsuqqzsntrycohqdykqd ; /usr/bin/python3
Dec 06 08:09:01 np0005548790.localdomain sudo[42151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:01 np0005548790.localdomain python3[42153]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmplyxubyvc recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:01 np0005548790.localdomain sudo[42151]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:04 np0005548790.localdomain sudo[42181]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkhlipjnrzsgzllipztcqgcutakgyldq ; /usr/bin/python3
Dec 06 08:09:04 np0005548790.localdomain sudo[42181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:04 np0005548790.localdomain python3[42183]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:09:07 np0005548790.localdomain sudo[42181]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:09 np0005548790.localdomain sudo[42198]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lyptmmzwtnabxfdxshoggimjtqmpjmpq ; /usr/bin/python3
Dec 06 08:09:09 np0005548790.localdomain sudo[42198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:09 np0005548790.localdomain python3[42200]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:09:09 np0005548790.localdomain sudo[42198]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:09 np0005548790.localdomain sudo[42216]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtzsulqeszhhydpoiwxsyormrifzrfax ; /usr/bin/python3
Dec 06 08:09:09 np0005548790.localdomain sudo[42216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:10 np0005548790.localdomain python3[42218]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:09:10 np0005548790.localdomain sudo[42216]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:10 np0005548790.localdomain sudo[42234]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkzhqrcczioufqytpyzbkqrmrdrsijlf ; /usr/bin/python3
Dec 06 08:09:10 np0005548790.localdomain sudo[42234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:10 np0005548790.localdomain python3[42236]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:09:10 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:09:10 np0005548790.localdomain systemd-rc-local-generator[42265]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:09:10 np0005548790.localdomain systemd-sysv-generator[42269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:09:10 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:09:10 np0005548790.localdomain systemd[1]: Starting Netfilter Tables...
Dec 06 08:09:11 np0005548790.localdomain systemd[1]: Finished Netfilter Tables.
Dec 06 08:09:11 np0005548790.localdomain sudo[42234]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:11 np0005548790.localdomain sudo[42324]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdngkhnpegqrdnicosooqtlxkwvbvwhc ; /usr/bin/python3
Dec 06 08:09:11 np0005548790.localdomain sudo[42324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:11 np0005548790.localdomain python3[42326]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:11 np0005548790.localdomain sudo[42324]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:11 np0005548790.localdomain sudo[42367]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hceppxtynabywrsppwhdaevssixdqzal ; /usr/bin/python3
Dec 06 08:09:11 np0005548790.localdomain sudo[42367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:12 np0005548790.localdomain python3[42369]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008551.4018726-74820-160361115537680/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:12 np0005548790.localdomain sudo[42367]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:12 np0005548790.localdomain sudo[42397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygzwsunihncqhwkigwoptucgfqzsrwjq ; /usr/bin/python3
Dec 06 08:09:12 np0005548790.localdomain sudo[42397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:12 np0005548790.localdomain python3[42399]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:12 np0005548790.localdomain sudo[42397]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:12 np0005548790.localdomain sudo[42415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwjkgluhvnddksakoipxjxhdwnlftqxj ; /usr/bin/python3
Dec 06 08:09:12 np0005548790.localdomain sudo[42415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:12 np0005548790.localdomain python3[42417]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:12 np0005548790.localdomain sudo[42415]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:13 np0005548790.localdomain sudo[42464]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qybscwwqowipoqvrtvejnulgofzqozba ; /usr/bin/python3
Dec 06 08:09:13 np0005548790.localdomain sudo[42464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:13 np0005548790.localdomain python3[42466]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:13 np0005548790.localdomain sudo[42464]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:13 np0005548790.localdomain sudo[42507]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fyqxhcyuivyvqezvihokptrjpbusbuxy ; /usr/bin/python3
Dec 06 08:09:13 np0005548790.localdomain sudo[42507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:13 np0005548790.localdomain python3[42509]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008553.080179-74935-59563590789280/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:13 np0005548790.localdomain sudo[42507]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:14 np0005548790.localdomain sudo[42569]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbmyxvsylopryxztyzvlrzlppzxwiles ; /usr/bin/python3
Dec 06 08:09:14 np0005548790.localdomain sudo[42569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:14 np0005548790.localdomain python3[42571]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:14 np0005548790.localdomain sudo[42569]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:14 np0005548790.localdomain sudo[42612]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyomvcifrpysuglyroylvxxpcbhxkzaa ; /usr/bin/python3
Dec 06 08:09:14 np0005548790.localdomain sudo[42612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:14 np0005548790.localdomain python3[42614]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008554.0053346-74992-77739290831125/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:14 np0005548790.localdomain sudo[42612]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:15 np0005548790.localdomain sudo[42674]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slvxllstxwsvbibkbuxgrewfhoybmwfq ; /usr/bin/python3
Dec 06 08:09:15 np0005548790.localdomain sudo[42674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:15 np0005548790.localdomain python3[42676]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:15 np0005548790.localdomain sudo[42674]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:15 np0005548790.localdomain sudo[42717]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqrgiacgcryghodqnrzujpoevvrnyloq ; /usr/bin/python3
Dec 06 08:09:15 np0005548790.localdomain sudo[42717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:15 np0005548790.localdomain python3[42719]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008555.004543-75049-255971704367706/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:15 np0005548790.localdomain sudo[42717]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:16 np0005548790.localdomain sudo[42779]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exzyrlbjlmdexdvfahpzufhdvrnobgvq ; /usr/bin/python3
Dec 06 08:09:16 np0005548790.localdomain sudo[42779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:16 np0005548790.localdomain python3[42781]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:16 np0005548790.localdomain sudo[42779]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:16 np0005548790.localdomain sudo[42822]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcyhbhcsjktuhkrjokadvxxgqciitkzk ; /usr/bin/python3
Dec 06 08:09:16 np0005548790.localdomain sudo[42822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:16 np0005548790.localdomain python3[42824]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008555.8696775-75109-9620904355747/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:16 np0005548790.localdomain sudo[42822]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:17 np0005548790.localdomain sudo[42884]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqnalbeomjmodllvtlnszlymvjtqkzkr ; /usr/bin/python3
Dec 06 08:09:17 np0005548790.localdomain sudo[42884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:17 np0005548790.localdomain python3[42886]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:17 np0005548790.localdomain sudo[42884]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:17 np0005548790.localdomain sudo[42927]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvhwvscaxcgzlmxgiymcfrnhfbsdseju ; /usr/bin/python3
Dec 06 08:09:17 np0005548790.localdomain sudo[42927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:17 np0005548790.localdomain python3[42929]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008556.7658374-75204-85915214978805/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:17 np0005548790.localdomain sudo[42927]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:18 np0005548790.localdomain sudo[42957]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnllsurirocozyplrpfzkgwrxyjzjumm ; /usr/bin/python3
Dec 06 08:09:18 np0005548790.localdomain sudo[42957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:18 np0005548790.localdomain python3[42959]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:18 np0005548790.localdomain sudo[42957]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:19 np0005548790.localdomain sudo[43022]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwcfgvikotizrtsmhcypxxoqwbswlffv ; /usr/bin/python3
Dec 06 08:09:19 np0005548790.localdomain sudo[43022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:19 np0005548790.localdomain python3[43024]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"
                                                         include "/etc/nftables/tripleo-chains.nft"
                                                         include "/etc/nftables/tripleo-rules.nft"
                                                         include "/etc/nftables/tripleo-jumps.nft"
                                                          state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:19 np0005548790.localdomain sudo[43022]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:19 np0005548790.localdomain sudo[43039]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sirrifoqmnkihobmdwstodqxdgbnyqan ; /usr/bin/python3
Dec 06 08:09:19 np0005548790.localdomain sudo[43039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:19 np0005548790.localdomain python3[43041]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:19 np0005548790.localdomain sudo[43039]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:19 np0005548790.localdomain sudo[43056]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgxrzxpwcuhdzxvmeuwsuhpkgflcqjry ; /usr/bin/python3
Dec 06 08:09:19 np0005548790.localdomain sudo[43056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:19 np0005548790.localdomain python3[43058]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:19 np0005548790.localdomain sudo[43056]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:20 np0005548790.localdomain sudo[43075]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnwiyzdnhvuqqjdlzvrmqieaukzmlhgh ; /usr/bin/python3
Dec 06 08:09:20 np0005548790.localdomain sudo[43075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:20 np0005548790.localdomain python3[43077]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:20 np0005548790.localdomain sudo[43075]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:20 np0005548790.localdomain sudo[43091]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slanukvvokbikjucbiriozalxecryihv ; /usr/bin/python3
Dec 06 08:09:20 np0005548790.localdomain sudo[43091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:20 np0005548790.localdomain python3[43093]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:20 np0005548790.localdomain sudo[43091]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:20 np0005548790.localdomain sudo[43107]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbvsvtlhwukoknyeuutxdieqhkuyfcth ; /usr/bin/python3
Dec 06 08:09:20 np0005548790.localdomain sudo[43107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:21 np0005548790.localdomain python3[43109]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:21 np0005548790.localdomain sudo[43107]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:21 np0005548790.localdomain sudo[43123]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czmhynnlhnizyvdgmvfvasmmbuerxaop ; /usr/bin/python3
Dec 06 08:09:21 np0005548790.localdomain sudo[43123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:21 np0005548790.localdomain python3[43125]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 08:09:22 np0005548790.localdomain sudo[43123]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:22 np0005548790.localdomain sudo[43143]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikjgokdzhjldtpawmcmuehntwvhftibb ; /usr/bin/python3
Dec 06 08:09:22 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Dec 06 08:09:22 np0005548790.localdomain sudo[43143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:22 np0005548790.localdomain python3[43145]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  Converting 2703 SID table entries...
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 08:09:23 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 08:09:23 np0005548790.localdomain sudo[43143]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:23 np0005548790.localdomain sudo[43164]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vrspokuqpghcxwirisyfohxcjulfzita ; /usr/bin/python3
Dec 06 08:09:23 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 06 08:09:23 np0005548790.localdomain sudo[43164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:24 np0005548790.localdomain python3[43166]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  Converting 2703 SID table entries...
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 08:09:24 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 08:09:25 np0005548790.localdomain sudo[43164]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:25 np0005548790.localdomain sudo[43192]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntcrkhcmtjxyntgfqfanblakrjmbhngo ; /usr/bin/python3
Dec 06 08:09:25 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 06 08:09:25 np0005548790.localdomain sudo[43192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:25 np0005548790.localdomain python3[43194]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  Converting 2703 SID table entries...
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 08:09:26 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 08:09:26 np0005548790.localdomain sudo[43192]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:26 np0005548790.localdomain sudo[43213]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lghsyzdfhwznromfinnaybynekhnedsr ; /usr/bin/python3
Dec 06 08:09:26 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 06 08:09:26 np0005548790.localdomain sudo[43213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:26 np0005548790.localdomain python3[43215]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:26 np0005548790.localdomain sudo[43213]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:26 np0005548790.localdomain sudo[43229]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvofwyzaeipjalpdfjlkrqwkqpvbzmno ; /usr/bin/python3
Dec 06 08:09:26 np0005548790.localdomain sudo[43229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:27 np0005548790.localdomain python3[43231]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:27 np0005548790.localdomain sudo[43229]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:27 np0005548790.localdomain sudo[43245]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bjuczbzrygxkhodojqcpcwzjaueqdwzu ; /usr/bin/python3
Dec 06 08:09:27 np0005548790.localdomain sudo[43245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:27 np0005548790.localdomain python3[43247]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:27 np0005548790.localdomain sudo[43245]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:27 np0005548790.localdomain sudo[43261]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxmftbprpxppwbrgssdmahjwinygwvrx ; /usr/bin/python3
Dec 06 08:09:27 np0005548790.localdomain sudo[43261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:27 np0005548790.localdomain python3[43263]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:09:27 np0005548790.localdomain sudo[43261]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:27 np0005548790.localdomain sudo[43277]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfxhskocsngwiieqtgsjferibzgemdxf ; /usr/bin/python3
Dec 06 08:09:27 np0005548790.localdomain sudo[43277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:27 np0005548790.localdomain python3[43279]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:28 np0005548790.localdomain sudo[43277]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:28 np0005548790.localdomain sudo[43294]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asffbssccpakwotndpwbokbtiyfbygal ; /usr/bin/python3
Dec 06 08:09:28 np0005548790.localdomain sudo[43294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:28 np0005548790.localdomain python3[43296]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:09:32 np0005548790.localdomain sudo[43294]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:32 np0005548790.localdomain sudo[43311]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spqjkckinbmvyzifvaixgylrdijbfqyc ; /usr/bin/python3
Dec 06 08:09:32 np0005548790.localdomain sudo[43311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:32 np0005548790.localdomain python3[43313]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:32 np0005548790.localdomain sudo[43311]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:32 np0005548790.localdomain sudo[43359]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-riasmbrbbpelsxxdgtmtxgcabjczythw ; /usr/bin/python3
Dec 06 08:09:33 np0005548790.localdomain sudo[43359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:33 np0005548790.localdomain python3[43361]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:33 np0005548790.localdomain sudo[43359]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:33 np0005548790.localdomain sudo[43402]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvidbdrakvgaruykezoccnbscpyblcaj ; /usr/bin/python3
Dec 06 08:09:33 np0005548790.localdomain sudo[43402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:33 np0005548790.localdomain python3[43404]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008572.834872-75966-266155170399993/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:33 np0005548790.localdomain sudo[43402]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:33 np0005548790.localdomain sudo[43432]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irpynbcexubvgjqzrcppsscbjxiwwhaw ; /usr/bin/python3
Dec 06 08:09:33 np0005548790.localdomain sudo[43432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:34 np0005548790.localdomain python3[43434]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 08:09:34 np0005548790.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 08:09:34 np0005548790.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 06 08:09:34 np0005548790.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 06 08:09:34 np0005548790.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 06 08:09:34 np0005548790.localdomain kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 06 08:09:34 np0005548790.localdomain kernel: Bridge firewalling registered
Dec 06 08:09:34 np0005548790.localdomain systemd-modules-load[43437]: Inserted module 'br_netfilter'
Dec 06 08:09:34 np0005548790.localdomain systemd-modules-load[43437]: Module 'msr' is built in
Dec 06 08:09:34 np0005548790.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 06 08:09:34 np0005548790.localdomain sudo[43432]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:35 np0005548790.localdomain sudo[43486]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxvlixyqfvmxucqntluaafklaefjapbp ; /usr/bin/python3
Dec 06 08:09:35 np0005548790.localdomain sudo[43486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:35 np0005548790.localdomain python3[43488]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:35 np0005548790.localdomain sudo[43486]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:35 np0005548790.localdomain sudo[43529]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkypohnsekkqpcmzxyprhhlsferuikgc ; /usr/bin/python3
Dec 06 08:09:35 np0005548790.localdomain sudo[43529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:36 np0005548790.localdomain python3[43531]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008575.3888474-76068-116512509560388/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:36 np0005548790.localdomain sudo[43529]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:36 np0005548790.localdomain sudo[43559]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfjeyahdtcmzcxmbfhanuydwikilhlap ; /usr/bin/python3
Dec 06 08:09:36 np0005548790.localdomain sudo[43559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:36 np0005548790.localdomain python3[43561]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:36 np0005548790.localdomain sudo[43559]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:36 np0005548790.localdomain sudo[43576]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzyyjoortkrvptvzvnnsofjpjbrttijd ; /usr/bin/python3
Dec 06 08:09:36 np0005548790.localdomain sudo[43576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:36 np0005548790.localdomain python3[43578]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:36 np0005548790.localdomain sudo[43576]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:37 np0005548790.localdomain sudo[43594]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmevffkppxzefvuervbkbypeissjhlnh ; /usr/bin/python3
Dec 06 08:09:37 np0005548790.localdomain sudo[43594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:37 np0005548790.localdomain python3[43596]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:37 np0005548790.localdomain sudo[43594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:37 np0005548790.localdomain sudo[43612]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlfvbldolzddtvaftkjflxzcsogjhvsi ; /usr/bin/python3
Dec 06 08:09:37 np0005548790.localdomain sudo[43612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:37 np0005548790.localdomain python3[43614]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:37 np0005548790.localdomain sudo[43612]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:37 np0005548790.localdomain sudo[43629]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flcqdlmcehmibdsxddhlumyolvdpbbpi ; /usr/bin/python3
Dec 06 08:09:37 np0005548790.localdomain sudo[43629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:37 np0005548790.localdomain python3[43631]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:38 np0005548790.localdomain sudo[43629]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:39 np0005548790.localdomain sudo[43646]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lioxaudbyvltqfpwbloljyifzqxcajfh ; /usr/bin/python3
Dec 06 08:09:39 np0005548790.localdomain sudo[43646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:39 np0005548790.localdomain python3[43648]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:39 np0005548790.localdomain sudo[43646]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:39 np0005548790.localdomain sudo[43663]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryxvmqjmuiadrhvwnjzsvevwiigaqjhd ; /usr/bin/python3
Dec 06 08:09:39 np0005548790.localdomain sudo[43663]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:39 np0005548790.localdomain python3[43665]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:39 np0005548790.localdomain sudo[43663]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:39 np0005548790.localdomain sudo[43681]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgzyjtltviwbqmxlbburhlvnmsgbgfrx ; /usr/bin/python3
Dec 06 08:09:39 np0005548790.localdomain sudo[43681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:39 np0005548790.localdomain python3[43683]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:40 np0005548790.localdomain sudo[43681]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:41 np0005548790.localdomain sudo[43699]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcfzgrbvooczggjqnmuchgqenqakkqzg ; /usr/bin/python3
Dec 06 08:09:41 np0005548790.localdomain sudo[43699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:41 np0005548790.localdomain python3[43701]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:41 np0005548790.localdomain sudo[43699]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:41 np0005548790.localdomain sudo[43717]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-smalgamgcruuabdcndpnrykmhdqewqub ; /usr/bin/python3
Dec 06 08:09:41 np0005548790.localdomain sudo[43717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:41 np0005548790.localdomain python3[43719]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:41 np0005548790.localdomain sudo[43717]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:41 np0005548790.localdomain sudo[43735]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mketkizvdzrvtukmmxntkjcmpnrknlnu ; /usr/bin/python3
Dec 06 08:09:41 np0005548790.localdomain sudo[43735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:41 np0005548790.localdomain python3[43737]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:41 np0005548790.localdomain sudo[43735]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:41 np0005548790.localdomain sudo[43753]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wniwtuffsqqascmjnvcyetgtotphtagm ; /usr/bin/python3
Dec 06 08:09:41 np0005548790.localdomain sudo[43753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:42 np0005548790.localdomain python3[43755]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:42 np0005548790.localdomain sudo[43753]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:42 np0005548790.localdomain sudo[43771]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mbvesxlslfvtuztorvhsflfqbvdyyclz ; /usr/bin/python3
Dec 06 08:09:42 np0005548790.localdomain sudo[43771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:42 np0005548790.localdomain python3[43773]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:42 np0005548790.localdomain sudo[43771]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:42 np0005548790.localdomain sudo[43789]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myhqoevadmgipnocgyxrrultnlxcqloc ; /usr/bin/python3
Dec 06 08:09:42 np0005548790.localdomain sudo[43789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:42 np0005548790.localdomain python3[43791]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:42 np0005548790.localdomain sudo[43789]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:42 np0005548790.localdomain sudo[43806]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxlmunbobmecggpqktfjwhjawihxfpsx ; /usr/bin/python3
Dec 06 08:09:42 np0005548790.localdomain sudo[43806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:43 np0005548790.localdomain python3[43808]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:43 np0005548790.localdomain sudo[43806]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:43 np0005548790.localdomain sudo[43823]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tuwohvombmaxyyvpexwuzxhjfzefpafl ; /usr/bin/python3
Dec 06 08:09:43 np0005548790.localdomain sudo[43823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:43 np0005548790.localdomain python3[43825]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:43 np0005548790.localdomain sudo[43823]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:43 np0005548790.localdomain sudo[43840]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mabzsyqrtymvxubnhewzezanvvlivjdl ; /usr/bin/python3
Dec 06 08:09:43 np0005548790.localdomain sudo[43840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:43 np0005548790.localdomain python3[43842]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:43 np0005548790.localdomain sudo[43840]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:43 np0005548790.localdomain sudo[43857]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhncomnpvmrboxtwujgruiznbowskysn ; /usr/bin/python3
Dec 06 08:09:43 np0005548790.localdomain sudo[43857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:43 np0005548790.localdomain python3[43859]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 06 08:09:43 np0005548790.localdomain sudo[43857]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:44 np0005548790.localdomain sudo[43875]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-adrrasxtmqixjlwhugaqjnmlhgomdano ; /usr/bin/python3
Dec 06 08:09:44 np0005548790.localdomain sudo[43875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:44 np0005548790.localdomain python3[43877]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 08:09:44 np0005548790.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 08:09:44 np0005548790.localdomain systemd[1]: Stopped Apply Kernel Variables.
Dec 06 08:09:44 np0005548790.localdomain systemd[1]: Stopping Apply Kernel Variables...
Dec 06 08:09:44 np0005548790.localdomain systemd[1]: Starting Apply Kernel Variables...
Dec 06 08:09:44 np0005548790.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 08:09:44 np0005548790.localdomain systemd[1]: Finished Apply Kernel Variables.
Dec 06 08:09:44 np0005548790.localdomain sudo[43875]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:44 np0005548790.localdomain sudo[43895]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tburqybdmtaqzjdhbvosyxtmlbbbusgn ; /usr/bin/python3
Dec 06 08:09:44 np0005548790.localdomain sudo[43895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:44 np0005548790.localdomain python3[43897]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:44 np0005548790.localdomain sudo[43895]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:44 np0005548790.localdomain sudo[43898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:44 np0005548790.localdomain sudo[43898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:09:44 np0005548790.localdomain sudo[43898]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:44 np0005548790.localdomain sudo[43913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 08:09:44 np0005548790.localdomain sudo[43913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:09:44 np0005548790.localdomain sudo[43940]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzvkuyulljzpkzuttmeiivkxvjiqlszj ; /usr/bin/python3
Dec 06 08:09:44 np0005548790.localdomain sudo[43940]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:45 np0005548790.localdomain python3[43943]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:45 np0005548790.localdomain sudo[43940]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:45 np0005548790.localdomain sudo[43969]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkdoevubrxahruuaepkfgobcnmgswkcf ; /usr/bin/python3
Dec 06 08:09:45 np0005548790.localdomain sudo[43969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:45 np0005548790.localdomain sudo[43913]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:45 np0005548790.localdomain python3[43977]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:45 np0005548790.localdomain sudo[43969]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:45 np0005548790.localdomain sudo[43993]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfclxoumjpomrucxyknjclpxcvvmdclb ; /usr/bin/python3
Dec 06 08:09:45 np0005548790.localdomain sudo[43993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:45 np0005548790.localdomain python3[43995]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:09:45 np0005548790.localdomain sudo[43993]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:46 np0005548790.localdomain sudo[44009]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-creyaaqibbudffhvjvansmjicawqvbqi ; /usr/bin/python3
Dec 06 08:09:46 np0005548790.localdomain sudo[44009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:46 np0005548790.localdomain python3[44011]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:46 np0005548790.localdomain sudo[44009]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:46 np0005548790.localdomain sudo[44012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:09:46 np0005548790.localdomain sudo[44012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:09:46 np0005548790.localdomain sudo[44012]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:46 np0005548790.localdomain sudo[44028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:09:46 np0005548790.localdomain sudo[44028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:09:46 np0005548790.localdomain sudo[44053]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwwrhaqebktxbamwqehkxblrjyukmfqi ; /usr/bin/python3
Dec 06 08:09:46 np0005548790.localdomain sudo[44053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:46 np0005548790.localdomain python3[44057]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:46 np0005548790.localdomain sudo[44053]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:46 np0005548790.localdomain sudo[44071]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khqdaaoikqozmaotwnjgrlnksgpipxwi ; /usr/bin/python3
Dec 06 08:09:46 np0005548790.localdomain sudo[44071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:46 np0005548790.localdomain python3[44080]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:46 np0005548790.localdomain sudo[44071]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:46 np0005548790.localdomain sudo[44106]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpjbfrewpobzufjiagtrqblyfrpttpld ; /usr/bin/python3
Dec 06 08:09:46 np0005548790.localdomain sudo[44106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:46 np0005548790.localdomain sudo[44028]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:46 np0005548790.localdomain python3[44113]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:47 np0005548790.localdomain sudo[44106]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:47 np0005548790.localdomain sudo[44134]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grenjvscuvxxuxjpubntunavossuwrvw ; /usr/bin/python3
Dec 06 08:09:47 np0005548790.localdomain sudo[44134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:47 np0005548790.localdomain python3[44136]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:47 np0005548790.localdomain sudo[44134]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:47 np0005548790.localdomain sudo[44137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:09:47 np0005548790.localdomain sudo[44137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:09:47 np0005548790.localdomain sudo[44137]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:47 np0005548790.localdomain sudo[44197]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kakzggwlrrxwkukiveujzlhabiqvjmly ; /usr/bin/python3
Dec 06 08:09:47 np0005548790.localdomain sudo[44197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:47 np0005548790.localdomain python3[44199]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:47 np0005548790.localdomain sudo[44197]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:48 np0005548790.localdomain sudo[44240]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fonebayyfzogmcbamakqmcasjamiifiy ; /usr/bin/python3
Dec 06 08:09:48 np0005548790.localdomain sudo[44240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:48 np0005548790.localdomain python3[44242]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008587.582353-76679-81559880018206/source _original_basename=tmp9rqgyuqc follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:48 np0005548790.localdomain sudo[44240]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:48 np0005548790.localdomain sudo[44270]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbxexdbyeveisbcqpsbxnbveebabkkyt ; /usr/bin/python3
Dec 06 08:09:48 np0005548790.localdomain sudo[44270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:48 np0005548790.localdomain python3[44272]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:48 np0005548790.localdomain sudo[44270]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:49 np0005548790.localdomain sudo[44287]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmredkhowsaqbhepzwphuhlkkfowqnmo ; /usr/bin/python3
Dec 06 08:09:49 np0005548790.localdomain sudo[44287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:49 np0005548790.localdomain python3[44289]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:49 np0005548790.localdomain sudo[44287]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:50 np0005548790.localdomain sudo[44335]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbfcuoputnvewprqzfdeyzbwsqqsphdv ; /usr/bin/python3
Dec 06 08:09:50 np0005548790.localdomain sudo[44335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:50 np0005548790.localdomain python3[44337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:50 np0005548790.localdomain sudo[44335]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:50 np0005548790.localdomain sudo[44378]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hotgofsaiixubnrjisqwmjojrxxwixdh ; /usr/bin/python3
Dec 06 08:09:50 np0005548790.localdomain sudo[44378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:50 np0005548790.localdomain python3[44380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008590.058058-76840-246178489256154/source _original_basename=tmp6fg6a4x8 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:50 np0005548790.localdomain sudo[44378]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 np0005548790.localdomain sudo[44408]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-szokqffjwhworqjgkagatbgbnlnashdy ; /usr/bin/python3
Dec 06 08:09:51 np0005548790.localdomain sudo[44408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:51 np0005548790.localdomain python3[44410]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:51 np0005548790.localdomain sudo[44408]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 np0005548790.localdomain sudo[44424]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkxfsimjbafhtnjbvevagfoiggbwvrbr ; /usr/bin/python3
Dec 06 08:09:51 np0005548790.localdomain sudo[44424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:51 np0005548790.localdomain python3[44426]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:51 np0005548790.localdomain sudo[44424]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:51 np0005548790.localdomain sudo[44440]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euatlffvjyyeqonnvsyoisbnoodgpnuj ; /usr/bin/python3
Dec 06 08:09:51 np0005548790.localdomain sudo[44440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:51 np0005548790.localdomain python3[44442]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:51 np0005548790.localdomain sudo[44440]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:52 np0005548790.localdomain sudo[44456]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkfdvfxdcwhgozvwnisxxfhhpuqqpjwy ; /usr/bin/python3
Dec 06 08:09:52 np0005548790.localdomain sudo[44456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:52 np0005548790.localdomain python3[44458]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:52 np0005548790.localdomain sudo[44456]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:52 np0005548790.localdomain sudo[44472]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xecmoowtkizpbpoiyzrtnlqyjduwlcpm ; /usr/bin/python3
Dec 06 08:09:52 np0005548790.localdomain sudo[44472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:52 np0005548790.localdomain python3[44474]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:52 np0005548790.localdomain sudo[44472]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:52 np0005548790.localdomain sudo[44488]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxjjccuflunzjixksmkkxlwpmeaqsanf ; /usr/bin/python3
Dec 06 08:09:52 np0005548790.localdomain sudo[44488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:52 np0005548790.localdomain python3[44490]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:52 np0005548790.localdomain sudo[44488]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 np0005548790.localdomain sudo[44504]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xphvescofcnjqkjyftfdynpfhwzxrado ; /usr/bin/python3
Dec 06 08:09:53 np0005548790.localdomain sudo[44504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:53 np0005548790.localdomain python3[44506]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:53 np0005548790.localdomain sudo[44504]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 np0005548790.localdomain sudo[44520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kexjvhyogvwsralblqxbahooidrlesdv ; /usr/bin/python3
Dec 06 08:09:53 np0005548790.localdomain sudo[44520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:53 np0005548790.localdomain python3[44522]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:53 np0005548790.localdomain sudo[44520]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:53 np0005548790.localdomain sudo[44536]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttzjexdlzytpjqnvlzkenauriqjgyqpi ; /usr/bin/python3
Dec 06 08:09:53 np0005548790.localdomain sudo[44536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:53 np0005548790.localdomain python3[44538]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:53 np0005548790.localdomain sudo[44536]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:54 np0005548790.localdomain sudo[44552]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngygvkmcmkitanopfjklpsuwprwpguxg ; /usr/bin/python3
Dec 06 08:09:54 np0005548790.localdomain sudo[44552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:54 np0005548790.localdomain python3[44554]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Dec 06 08:09:54 np0005548790.localdomain groupadd[44555]: group added to /etc/group: name=qemu, GID=107
Dec 06 08:09:54 np0005548790.localdomain groupadd[44555]: group added to /etc/gshadow: name=qemu
Dec 06 08:09:54 np0005548790.localdomain groupadd[44555]: new group: name=qemu, GID=107
Dec 06 08:09:54 np0005548790.localdomain sudo[44552]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:54 np0005548790.localdomain sudo[44574]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqalknczjwdjorjgimiipequiulthsih ; /usr/bin/python3
Dec 06 08:09:54 np0005548790.localdomain sudo[44574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:54 np0005548790.localdomain python3[44576]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548790.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 06 08:09:54 np0005548790.localdomain useradd[44578]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=none
Dec 06 08:09:54 np0005548790.localdomain sudo[44574]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:54 np0005548790.localdomain sudo[44598]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykxjgulfhtajqvdyvreqcbondxpzdnhu ; /usr/bin/python3
Dec 06 08:09:54 np0005548790.localdomain sudo[44598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:55 np0005548790.localdomain python3[44600]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Dec 06 08:09:55 np0005548790.localdomain sudo[44598]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:55 np0005548790.localdomain sudo[44614]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgnciuhdcyucalxviytuvlrgglwhctaf ; /usr/bin/python3
Dec 06 08:09:55 np0005548790.localdomain sudo[44614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:55 np0005548790.localdomain python3[44616]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:09:55 np0005548790.localdomain sudo[44614]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:55 np0005548790.localdomain sudo[44663]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvrlayzjmduvynhnxsolcukrlxnuxdnz ; /usr/bin/python3
Dec 06 08:09:55 np0005548790.localdomain sudo[44663]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:55 np0005548790.localdomain python3[44665]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:09:55 np0005548790.localdomain sudo[44663]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:56 np0005548790.localdomain sudo[44706]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbiqoxfsxyqakjfezlraanqrsuyobwfm ; /usr/bin/python3
Dec 06 08:09:56 np0005548790.localdomain sudo[44706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:56 np0005548790.localdomain python3[44708]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008595.5478594-77097-214292844204905/source _original_basename=tmpv4ul6ahz follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:56 np0005548790.localdomain sudo[44706]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:56 np0005548790.localdomain sudo[44736]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpcmgfjhafrlvlhfcfckupziuglcgdry ; /usr/bin/python3
Dec 06 08:09:56 np0005548790.localdomain sudo[44736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:56 np0005548790.localdomain python3[44738]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 06 08:09:57 np0005548790.localdomain sudo[44736]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:57 np0005548790.localdomain sudo[44763]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsiphbkwsxtqabqqcttgxkzmfywwqgap ; /usr/bin/python3
Dec 06 08:09:57 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 06 08:09:57 np0005548790.localdomain sudo[44763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:57 np0005548790.localdomain python3[44765]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:09:57 np0005548790.localdomain sudo[44763]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:58 np0005548790.localdomain sudo[44779]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnatguyzrtaknzwrgmcyquknkrjqlhxe ; /usr/bin/python3
Dec 06 08:09:58 np0005548790.localdomain sudo[44779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:58 np0005548790.localdomain python3[44781]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:09:58 np0005548790.localdomain sudo[44779]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:58 np0005548790.localdomain sudo[44795]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpqbftgrnkxyaggnxnctnpxlpjzwrvzb ; /usr/bin/python3
Dec 06 08:09:58 np0005548790.localdomain sudo[44795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:09:58 np0005548790.localdomain python3[44797]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Dec 06 08:09:59 np0005548790.localdomain sudo[44795]: pam_unix(sudo:session): session closed for user root
Dec 06 08:09:59 np0005548790.localdomain sudo[44815]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzhbkolkxyksvfblyodmllpwnpjvfwfo ; /usr/bin/python3
Dec 06 08:09:59 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 06 08:09:59 np0005548790.localdomain sudo[44815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:00 np0005548790.localdomain python3[44817]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:10:02 np0005548790.localdomain sudo[44815]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:02 np0005548790.localdomain sudo[44832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njkzjqvyoegqjmqxsbuhythndkftivnb ; /usr/bin/python3
Dec 06 08:10:02 np0005548790.localdomain sudo[44832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:03 np0005548790.localdomain python3[44834]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 08:10:03 np0005548790.localdomain sudo[44832]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:03 np0005548790.localdomain sudo[44893]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-muqqcxldrllyugjxdxwnnqcnrfpkipra ; /usr/bin/python3
Dec 06 08:10:03 np0005548790.localdomain sudo[44893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:03 np0005548790.localdomain python3[44895]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:03 np0005548790.localdomain sudo[44893]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:04 np0005548790.localdomain sudo[44909]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xyfqwywnkbzthwdcwojozaekdtmoctba ; /usr/bin/python3
Dec 06 08:10:04 np0005548790.localdomain sudo[44909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:04 np0005548790.localdomain python3[44911]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:04 np0005548790.localdomain sudo[44909]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:04 np0005548790.localdomain sudo[44968]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahwfsjhbqpjygwyqfsxiodaucfcrxwse ; /usr/bin/python3
Dec 06 08:10:04 np0005548790.localdomain sudo[44968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:04 np0005548790.localdomain python3[44970]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:04 np0005548790.localdomain sudo[44968]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:05 np0005548790.localdomain sudo[45011]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdwbatwwwwidsuvwyatavrpfbqzdqvkm ; /usr/bin/python3
Dec 06 08:10:05 np0005548790.localdomain sudo[45011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:05 np0005548790.localdomain python3[45013]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008604.4596484-77446-122629591994523/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=c08e06536b1638983d0ada766148e53f9b197e73 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:05 np0005548790.localdomain sudo[45011]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:05 np0005548790.localdomain sudo[45073]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksgwopidlwqzvwuxudbotkenuwpjxwto ; /usr/bin/python3
Dec 06 08:10:05 np0005548790.localdomain sudo[45073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:05 np0005548790.localdomain python3[45075]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:06 np0005548790.localdomain sudo[45073]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:06 np0005548790.localdomain sudo[45118]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsfiqxjhjgtnfbhogbgduipmsymcncss ; /usr/bin/python3
Dec 06 08:10:06 np0005548790.localdomain sudo[45118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:06 np0005548790.localdomain python3[45120]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008605.5917532-77500-257733230665816/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:06 np0005548790.localdomain sudo[45118]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:06 np0005548790.localdomain sudo[45148]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsgyswsupiezypqjwkhcpzqmewltgoxv ; /usr/bin/python3
Dec 06 08:10:06 np0005548790.localdomain sudo[45148]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:06 np0005548790.localdomain python3[45150]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:06 np0005548790.localdomain sudo[45148]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:06 np0005548790.localdomain sudo[45164]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fcngbxkfghjennaxpqlkuivcdotgwnll ; /usr/bin/python3
Dec 06 08:10:06 np0005548790.localdomain sudo[45164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:07 np0005548790.localdomain python3[45166]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:07 np0005548790.localdomain sudo[45164]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:07 np0005548790.localdomain sudo[45180]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjkvsmdaquwcrgqkgnaxwbxoaysnkyft ; /usr/bin/python3
Dec 06 08:10:07 np0005548790.localdomain sudo[45180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:07 np0005548790.localdomain python3[45182]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:07 np0005548790.localdomain sudo[45180]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:07 np0005548790.localdomain sudo[45196]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsctdzztpwwhyypeqyfndtcbshpmqnom ; /usr/bin/python3
Dec 06 08:10:07 np0005548790.localdomain sudo[45196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:07 np0005548790.localdomain python3[45198]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:07 np0005548790.localdomain sudo[45196]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:08 np0005548790.localdomain sudo[45244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-layefkmgabftwkrlivuawcqktnejhdmg ; /usr/bin/python3
Dec 06 08:10:08 np0005548790.localdomain sudo[45244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:08 np0005548790.localdomain python3[45246]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:08 np0005548790.localdomain sudo[45244]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:10:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Cumulative writes: 3257 writes, 16K keys, 3257 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                                          Cumulative WAL: 3257 writes, 144 syncs, 22.62 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3257 writes, 16K keys, 3257 commit groups, 1.0 writes per commit group, ingest: 14.67 MB, 0.02 MB/s
                                                          Interval WAL: 3257 writes, 144 syncs, 22.62 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 08:10:08 np0005548790.localdomain sudo[45287]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxbesczdguvluszlbdoqlwemfpepgrxx ; /usr/bin/python3
Dec 06 08:10:08 np0005548790.localdomain sudo[45287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:08 np0005548790.localdomain python3[45289]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008608.1986158-77723-256847937167247/source _original_basename=tmppzc656k_ follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:08 np0005548790.localdomain sudo[45287]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:09 np0005548790.localdomain sudo[45317]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmsstflrhiilamghxnxjvdvllogkqgzb ; /usr/bin/python3
Dec 06 08:10:09 np0005548790.localdomain sudo[45317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:09 np0005548790.localdomain python3[45319]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:09 np0005548790.localdomain sudo[45317]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:09 np0005548790.localdomain sudo[45333]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xaymrkhodnrexfogzzkiogtdmzeykshk ; /usr/bin/python3
Dec 06 08:10:09 np0005548790.localdomain sudo[45333]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:09 np0005548790.localdomain python3[45335]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:09 np0005548790.localdomain sudo[45333]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:10 np0005548790.localdomain sudo[45349]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntartigbfnpnigpehalcvxbsqxjooxmn ; /usr/bin/python3
Dec 06 08:10:10 np0005548790.localdomain sudo[45349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:10 np0005548790.localdomain python3[45351]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:10:12 np0005548790.localdomain sudo[45349]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:10:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Cumulative writes: 3387 writes, 16K keys, 3387 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s
                                                          Cumulative WAL: 3387 writes, 196 syncs, 17.28 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3387 writes, 16K keys, 3387 commit groups, 1.0 writes per commit group, ingest: 15.28 MB, 0.03 MB/s
                                                          Interval WAL: 3387 writes, 196 syncs, 17.28 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 08:10:13 np0005548790.localdomain sudo[45398]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uelyzanbpfeeiaajfmnygabprarsddar ; /usr/bin/python3
Dec 06 08:10:13 np0005548790.localdomain sudo[45398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:13 np0005548790.localdomain python3[45400]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:13 np0005548790.localdomain sudo[45398]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:14 np0005548790.localdomain sudo[45443]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rtdfuvycrvqurzmtryskgjtmuyezghmq ; /usr/bin/python3
Dec 06 08:10:14 np0005548790.localdomain sudo[45443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:14 np0005548790.localdomain python3[45445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008613.6448567-78097-163741570571780/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:14 np0005548790.localdomain sudo[45443]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:14 np0005548790.localdomain sudo[45474]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrsshtdgzmelawyvzjdychzuyatqaztu ; /usr/bin/python3
Dec 06 08:10:14 np0005548790.localdomain sudo[45474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:14 np0005548790.localdomain python3[45476]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:10:14 np0005548790.localdomain sshd[1129]: Received signal 15; terminating.
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: sshd.service: Consumed 3.261s CPU time, read 2.1M from disk, written 52.0K to disk.
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 08:10:14 np0005548790.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 06 08:10:15 np0005548790.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 06 08:10:15 np0005548790.localdomain sshd[45480]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:10:15 np0005548790.localdomain sshd[45480]: Server listening on 0.0.0.0 port 22.
Dec 06 08:10:15 np0005548790.localdomain sshd[45480]: Server listening on :: port 22.
Dec 06 08:10:15 np0005548790.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 06 08:10:15 np0005548790.localdomain sudo[45474]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:15 np0005548790.localdomain sudo[45494]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mprrnpohzwnkbvrwrpqhqkglximvpqtz ; /usr/bin/python3
Dec 06 08:10:15 np0005548790.localdomain sudo[45494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:15 np0005548790.localdomain python3[45496]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:15 np0005548790.localdomain sudo[45494]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:16 np0005548790.localdomain sudo[45512]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-burkcvymzwbzevhofgneolweedgvlixu ; /usr/bin/python3
Dec 06 08:10:16 np0005548790.localdomain sudo[45512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:16 np0005548790.localdomain python3[45514]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:16 np0005548790.localdomain sudo[45512]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:16 np0005548790.localdomain sudo[45530]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qglxnkaiomavnloymuzbyaljqxldlkfj ; /usr/bin/python3
Dec 06 08:10:16 np0005548790.localdomain sudo[45530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:16 np0005548790.localdomain python3[45532]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:10:19 np0005548790.localdomain sudo[45530]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:20 np0005548790.localdomain sudo[45579]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ysibvuxkroqckttqhymuglvhfzgaigiy ; /usr/bin/python3
Dec 06 08:10:20 np0005548790.localdomain sudo[45579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:20 np0005548790.localdomain python3[45581]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:20 np0005548790.localdomain sudo[45579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:20 np0005548790.localdomain sudo[45597]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbgpsharqrueuntlaatezsovfdfcgvst ; /usr/bin/python3
Dec 06 08:10:20 np0005548790.localdomain sudo[45597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:20 np0005548790.localdomain python3[45599]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:20 np0005548790.localdomain sudo[45597]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:21 np0005548790.localdomain sudo[45627]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plyzvqpffclfqmldzvhyhvphsemyxyta ; /usr/bin/python3
Dec 06 08:10:21 np0005548790.localdomain sudo[45627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:21 np0005548790.localdomain python3[45629]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:10:21 np0005548790.localdomain sudo[45627]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:21 np0005548790.localdomain sudo[45677]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scwpodydffomuzxnpqcqahlcqwrvesxa ; /usr/bin/python3
Dec 06 08:10:21 np0005548790.localdomain sudo[45677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:22 np0005548790.localdomain python3[45679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:22 np0005548790.localdomain sudo[45677]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:22 np0005548790.localdomain sudo[45695]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ndceuxswjbikgigawqxxwmmqarbcmufo ; /usr/bin/python3
Dec 06 08:10:22 np0005548790.localdomain sudo[45695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:22 np0005548790.localdomain python3[45697]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:22 np0005548790.localdomain sudo[45695]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:22 np0005548790.localdomain sudo[45725]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wkkiduuonvemoosvwxxtjbvyyjrebyom ; /usr/bin/python3
Dec 06 08:10:22 np0005548790.localdomain sudo[45725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:22 np0005548790.localdomain python3[45727]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:10:22 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:10:23 np0005548790.localdomain systemd-rc-local-generator[45751]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:10:23 np0005548790.localdomain systemd-sysv-generator[45756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:10:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:10:23 np0005548790.localdomain systemd[1]: Starting chronyd online sources service...
Dec 06 08:10:23 np0005548790.localdomain chronyc[45767]: 200 OK
Dec 06 08:10:23 np0005548790.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Dec 06 08:10:23 np0005548790.localdomain systemd[1]: Finished chronyd online sources service.
Dec 06 08:10:23 np0005548790.localdomain sudo[45725]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:23 np0005548790.localdomain sudo[45781]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dszlkiutskktlrfufiaguvzoiekmenfh ; /usr/bin/python3
Dec 06 08:10:23 np0005548790.localdomain sudo[45781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:23 np0005548790.localdomain python3[45783]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:23 np0005548790.localdomain chronyd[25781]: System clock was stepped by -0.000009 seconds
Dec 06 08:10:23 np0005548790.localdomain sudo[45781]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:24 np0005548790.localdomain sudo[45798]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wafzbkdqnocsvfcpdnwzpprqkeuonwly ; /usr/bin/python3
Dec 06 08:10:24 np0005548790.localdomain sudo[45798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:24 np0005548790.localdomain python3[45800]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:24 np0005548790.localdomain sudo[45798]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:24 np0005548790.localdomain sudo[45815]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aanjorffrnxrawnyaxcasgdrejnfyoef ; /usr/bin/python3
Dec 06 08:10:24 np0005548790.localdomain sudo[45815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:24 np0005548790.localdomain python3[45817]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:24 np0005548790.localdomain chronyd[25781]: System clock was stepped by 0.000000 seconds
Dec 06 08:10:24 np0005548790.localdomain sudo[45815]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:24 np0005548790.localdomain sudo[45832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gdncikjnnilllmienxakqcbayknjdkae ; /usr/bin/python3
Dec 06 08:10:24 np0005548790.localdomain sudo[45832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:24 np0005548790.localdomain python3[45834]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:24 np0005548790.localdomain sudo[45832]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:25 np0005548790.localdomain sudo[45849]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gokyekdtotbufrsjjeuyjgsmnmcdeoin ; /usr/bin/python3
Dec 06 08:10:25 np0005548790.localdomain sudo[45849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:25 np0005548790.localdomain python3[45851]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 06 08:10:25 np0005548790.localdomain systemd[1]: Starting Time & Date Service...
Dec 06 08:10:25 np0005548790.localdomain systemd[1]: Started Time & Date Service.
Dec 06 08:10:25 np0005548790.localdomain sudo[45849]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:26 np0005548790.localdomain sudo[45869]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itxkndmdqnpccwsesneeeipnpolmbrpq ; /usr/bin/python3
Dec 06 08:10:26 np0005548790.localdomain sudo[45869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:26 np0005548790.localdomain python3[45871]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:26 np0005548790.localdomain sudo[45869]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:27 np0005548790.localdomain sudo[45886]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sizyzewjhyaevxmeawqensuqvrvvcchr ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 06 08:10:27 np0005548790.localdomain sudo[45886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:27 np0005548790.localdomain python3[45888]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:27 np0005548790.localdomain sudo[45886]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:27 np0005548790.localdomain sudo[45903]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrssslhxryfdwsfdnwvioyxybcakkefk ; /usr/bin/python3
Dec 06 08:10:27 np0005548790.localdomain sudo[45903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:27 np0005548790.localdomain python3[45905]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 06 08:10:27 np0005548790.localdomain sudo[45903]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:27 np0005548790.localdomain sudo[45919]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vyaszwdlhxfglmgywjcocpogirjpaaug ; /usr/bin/python3
Dec 06 08:10:27 np0005548790.localdomain sudo[45919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:28 np0005548790.localdomain python3[45921]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:10:28 np0005548790.localdomain sudo[45919]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:28 np0005548790.localdomain sudo[45935]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcokoesomoxktbwlrxyqduvzvtanqkly ; /usr/bin/python3
Dec 06 08:10:28 np0005548790.localdomain sudo[45935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:28 np0005548790.localdomain python3[45937]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:28 np0005548790.localdomain sudo[45935]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:28 np0005548790.localdomain sudo[45951]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnjgrnsueyzllxxccckzciinjhqpqicr ; /usr/bin/python3
Dec 06 08:10:28 np0005548790.localdomain sudo[45951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:29 np0005548790.localdomain python3[45953]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:29 np0005548790.localdomain sudo[45951]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:29 np0005548790.localdomain sudo[45999]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hclhrpijltsppzcwoyojkqanuvpfmwjf ; /usr/bin/python3
Dec 06 08:10:29 np0005548790.localdomain sudo[45999]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:29 np0005548790.localdomain python3[46001]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:29 np0005548790.localdomain sudo[45999]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:29 np0005548790.localdomain sudo[46042]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eyncpdfbadlmfkejfqvdhtgemyxlpvlv ; /usr/bin/python3
Dec 06 08:10:29 np0005548790.localdomain sudo[46042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:29 np0005548790.localdomain python3[46044]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008629.217698-78950-112018318463449/source _original_basename=tmpm3tafmyf follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:29 np0005548790.localdomain sudo[46042]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:30 np0005548790.localdomain sudo[46104]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ammowsxxtbjioickcpzmzycxzfgfvamb ; /usr/bin/python3
Dec 06 08:10:30 np0005548790.localdomain sudo[46104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:30 np0005548790.localdomain python3[46106]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:30 np0005548790.localdomain sudo[46104]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:30 np0005548790.localdomain sudo[46147]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eepybmhqcupetmnrsmgxrevbmpvhzfrz ; /usr/bin/python3
Dec 06 08:10:30 np0005548790.localdomain sudo[46147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:30 np0005548790.localdomain python3[46149]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008630.1044629-79061-69023071310806/source _original_basename=tmpfdovork6 follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:30 np0005548790.localdomain sudo[46147]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:31 np0005548790.localdomain sudo[46177]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhzflcgpdbychmlduakwsyfungwhsnjr ; /usr/bin/python3
Dec 06 08:10:31 np0005548790.localdomain sudo[46177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:31 np0005548790.localdomain python3[46179]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 08:10:31 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:10:31 np0005548790.localdomain systemd-sysv-generator[46211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:10:31 np0005548790.localdomain systemd-rc-local-generator[46208]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:10:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:10:31 np0005548790.localdomain sudo[46177]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:31 np0005548790.localdomain sudo[46231]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltpmpgmzwmhlndklhdpxtzsgboqzubwn ; /usr/bin/python3
Dec 06 08:10:31 np0005548790.localdomain sudo[46231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:32 np0005548790.localdomain python3[46233]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:32 np0005548790.localdomain sudo[46231]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:32 np0005548790.localdomain sudo[46247]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtrttpqdehoodthagkgaioqrtkmqajcv ; /usr/bin/python3
Dec 06 08:10:32 np0005548790.localdomain sudo[46247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:32 np0005548790.localdomain python3[46249]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:32 np0005548790.localdomain systemd[36046]: Created slice User Background Tasks Slice.
Dec 06 08:10:32 np0005548790.localdomain systemd[36046]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 08:10:32 np0005548790.localdomain sudo[46247]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:32 np0005548790.localdomain systemd[36046]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 08:10:32 np0005548790.localdomain sudo[46265]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osncdurvzcoyamxywbmkfndrgmdgwdth ; /usr/bin/python3
Dec 06 08:10:32 np0005548790.localdomain sudo[46265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:32 np0005548790.localdomain python3[46267]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:10:32 np0005548790.localdomain systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Dec 06 08:10:32 np0005548790.localdomain sudo[46265]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:32 np0005548790.localdomain sudo[46282]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cpvjokinnxsetmoprxdeeoqgopsahopw ; /usr/bin/python3
Dec 06 08:10:32 np0005548790.localdomain sudo[46282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:33 np0005548790.localdomain python3[46284]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:10:33 np0005548790.localdomain sudo[46282]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:33 np0005548790.localdomain sudo[46298]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uufrtbgbbuxeqzveqeazqyznglpschuh ; /usr/bin/python3
Dec 06 08:10:33 np0005548790.localdomain sudo[46298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:33 np0005548790.localdomain python3[46300]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:33 np0005548790.localdomain sudo[46298]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:33 np0005548790.localdomain sudo[46346]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgjjwtmfcekvyxsmspgerzgecgbycztp ; /usr/bin/python3
Dec 06 08:10:33 np0005548790.localdomain sudo[46346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:33 np0005548790.localdomain python3[46348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:10:34 np0005548790.localdomain sudo[46346]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:34 np0005548790.localdomain sudo[46389]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpjrcdrepcgjmgncejkskgptzhalxihu ; /usr/bin/python3
Dec 06 08:10:34 np0005548790.localdomain sudo[46389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:34 np0005548790.localdomain python3[46391]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008633.7180834-79223-95803786086691/source _original_basename=tmp79rado7i follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:34 np0005548790.localdomain sudo[46389]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:46 np0005548790.localdomain sshd[46406]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:10:47 np0005548790.localdomain sshd[46406]: Invalid user solana from 193.32.162.146 port 37660
Dec 06 08:10:47 np0005548790.localdomain sshd[46406]: Connection closed by invalid user solana 193.32.162.146 port 37660 [preauth]
Dec 06 08:10:47 np0005548790.localdomain sudo[46408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:10:47 np0005548790.localdomain sudo[46408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:10:47 np0005548790.localdomain sudo[46408]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:47 np0005548790.localdomain sudo[46423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:10:47 np0005548790.localdomain sudo[46423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:10:48 np0005548790.localdomain sudo[46423]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:49 np0005548790.localdomain sudo[46470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:10:49 np0005548790.localdomain sudo[46470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:10:49 np0005548790.localdomain sudo[46470]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:52 np0005548790.localdomain sshd[46485]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:10:53 np0005548790.localdomain sshd[46485]: Received disconnect from 45.78.219.217 port 58242:11: Bye Bye [preauth]
Dec 06 08:10:53 np0005548790.localdomain sshd[46485]: Disconnected from authenticating user root 45.78.219.217 port 58242 [preauth]
Dec 06 08:10:55 np0005548790.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 08:10:55 np0005548790.localdomain sshd[46489]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:10:57 np0005548790.localdomain sshd[46489]: Invalid user admin from 45.135.232.92 port 39092
Dec 06 08:10:57 np0005548790.localdomain sudo[46504]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uevxrlojfbfmdulvqgrwalfvzoojeurm ; /usr/bin/python3
Dec 06 08:10:57 np0005548790.localdomain sudo[46504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:57 np0005548790.localdomain python3[46506]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:10:57 np0005548790.localdomain sudo[46504]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:57 np0005548790.localdomain sshd[46489]: Connection reset by invalid user admin 45.135.232.92 port 39092 [preauth]
Dec 06 08:10:57 np0005548790.localdomain sshd[46521]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:10:57 np0005548790.localdomain sudo[46520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofufbdgxaamlgfylsdqpjzcxearzfyez ; /usr/bin/python3
Dec 06 08:10:57 np0005548790.localdomain sudo[46520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:58 np0005548790.localdomain python3[46523]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Dec 06 08:10:58 np0005548790.localdomain sudo[46520]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 np0005548790.localdomain sudo[46537]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-squatndqcmkvoyqyurgiwxivrzlkyidy ; /usr/bin/python3
Dec 06 08:10:58 np0005548790.localdomain sudo[46537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:58 np0005548790.localdomain python3[46539]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:10:58 np0005548790.localdomain sudo[46537]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:58 np0005548790.localdomain sudo[46554]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bedpvskhllwygbmzznaoeavwyyjqrdve ; /usr/bin/python3
Dec 06 08:10:58 np0005548790.localdomain sudo[46554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:58 np0005548790.localdomain python3[46556]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:58 np0005548790.localdomain sudo[46554]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:59 np0005548790.localdomain sudo[46570]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zaxkfocjzypqirlmvemekvqdokflkgmr ; /usr/bin/python3
Dec 06 08:10:59 np0005548790.localdomain sudo[46570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:59 np0005548790.localdomain python3[46572]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:10:59 np0005548790.localdomain sudo[46570]: pam_unix(sudo:session): session closed for user root
Dec 06 08:10:59 np0005548790.localdomain sudo[46586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mppkjrpmxwnjsqeyopchkohcswvgiuak ; /usr/bin/python3
Dec 06 08:10:59 np0005548790.localdomain sudo[46586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:10:59 np0005548790.localdomain python3[46588]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  Converting 2706 SID table entries...
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 08:11:00 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 08:11:00 np0005548790.localdomain sudo[46586]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:00 np0005548790.localdomain sshd[46521]: Invalid user admin from 45.135.232.92 port 39114
Dec 06 08:11:00 np0005548790.localdomain sudo[46607]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hliitzkmfbriphwexkqxjkcwoauyngmo ; /usr/bin/python3
Dec 06 08:11:00 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 06 08:11:00 np0005548790.localdomain sudo[46607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:00 np0005548790.localdomain python3[46609]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:11:00 np0005548790.localdomain sudo[46607]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:01 np0005548790.localdomain sudo[46623]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iaxqvbltwxrokrjchkgwwdpialyjboqn ; /usr/bin/python3
Dec 06 08:11:01 np0005548790.localdomain sudo[46623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:01 np0005548790.localdomain sshd[46521]: Connection reset by invalid user admin 45.135.232.92 port 39114 [preauth]
Dec 06 08:11:01 np0005548790.localdomain sudo[46623]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:01 np0005548790.localdomain sshd[46626]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:11:01 np0005548790.localdomain sudo[46672]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oocmqdjvlhocdjnunthuxtcuixynjbkw ; /usr/bin/python3
Dec 06 08:11:01 np0005548790.localdomain sudo[46672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:01 np0005548790.localdomain sudo[46672]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:02 np0005548790.localdomain sudo[46716]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbahymyoqmvsdzrxuvbuwjeskcdcehcc ; /usr/bin/python3
Dec 06 08:11:02 np0005548790.localdomain sudo[46716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:02 np0005548790.localdomain sudo[46716]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:02 np0005548790.localdomain sudo[46746]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubgvhcjxlcrubjylguogzmjgjagpytwc ; /usr/bin/python3
Dec 06 08:11:02 np0005548790.localdomain sudo[46746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:03 np0005548790.localdomain python3[46748]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 
'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, 'nova_virtnodedevd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 
2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtproxyd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtqemud': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 'nova_virtsecretd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtstoraged': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, 'rsyslog': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}}, 'step_4': {'ceilometer_agent_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'ceilometer_agent_ipmi': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'configure_cms_options': {'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, 'logrotate_crond': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 'nova_libvirt_init_secret': {'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, 'nova_migration_target': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 'ovn_controller': 
{'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 'ovn_metadata_agent': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 'setup_ovs_manager': {'command': 
['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}}, 'step_5': {'nova_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 'nova_wait_for_compute_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}}}
Dec 06 08:11:03 np0005548790.localdomain sudo[46746]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:03 np0005548790.localdomain rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Dec 06 08:11:03 np0005548790.localdomain sudo[46762]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjnytzluunfqxqhtcenwdnpudwtleuwd ; /usr/bin/python3
Dec 06 08:11:03 np0005548790.localdomain sudo[46762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:03 np0005548790.localdomain python3[46764]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:11:03 np0005548790.localdomain sudo[46762]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:03 np0005548790.localdomain sudo[46778]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jepfcdbkmeqjwviytivakhcpxwxpjzpy ; /usr/bin/python3
Dec 06 08:11:03 np0005548790.localdomain sudo[46778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:03 np0005548790.localdomain sshd[46626]: Connection reset by authenticating user root 45.135.232.92 port 39136 [preauth]
Dec 06 08:11:03 np0005548790.localdomain python3[46780]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:11:03 np0005548790.localdomain sudo[46778]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:04 np0005548790.localdomain sshd[46781]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:11:04 np0005548790.localdomain sudo[46795]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okkghjmvklcnamvrfawrdlesiwtogxvx ; /usr/bin/python3
Dec 06 08:11:04 np0005548790.localdomain sudo[46795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:04 np0005548790.localdomain python3[46797]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': 
True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': 
[{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Dec 06 08:11:04 np0005548790.localdomain sudo[46795]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:05 np0005548790.localdomain sshd[46781]: Invalid user demo from 45.135.232.92 port 39170
Dec 06 08:11:06 np0005548790.localdomain sshd[46781]: Connection reset by invalid user demo 45.135.232.92 port 39170 [preauth]
Dec 06 08:11:06 np0005548790.localdomain sshd[46799]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:11:08 np0005548790.localdomain sshd[46799]: Invalid user op from 45.135.232.92 port 32334
Dec 06 08:11:09 np0005548790.localdomain sshd[46799]: Connection reset by invalid user op 45.135.232.92 port 32334 [preauth]
Dec 06 08:11:09 np0005548790.localdomain sudo[46846]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlvtuqcrjbnibmmckdlwpbdhnvznbzdf ; /usr/bin/python3
Dec 06 08:11:09 np0005548790.localdomain sudo[46846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:09 np0005548790.localdomain python3[46848]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:11:09 np0005548790.localdomain sudo[46846]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:10 np0005548790.localdomain sudo[46889]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpkkewaauqijozecpkzwclhzzbfosqlc ; /usr/bin/python3
Dec 06 08:11:10 np0005548790.localdomain sudo[46889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:10 np0005548790.localdomain python3[46891]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008669.4703047-80759-110002395736822/source _original_basename=tmp32d4505s follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:11:10 np0005548790.localdomain sudo[46889]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:10 np0005548790.localdomain sudo[46919]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phapvaisevknkbrynifsahkmdipijmwk ; /usr/bin/python3
Dec 06 08:11:10 np0005548790.localdomain sudo[46919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:10 np0005548790.localdomain python3[46921]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:11:11 np0005548790.localdomain sudo[46919]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:12 np0005548790.localdomain sudo[46969]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdbknktntshfufqoubhrllxdcwhkhwul ; /usr/bin/python3
Dec 06 08:11:12 np0005548790.localdomain sudo[46969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:12 np0005548790.localdomain sudo[46969]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:12 np0005548790.localdomain sudo[47012]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmwbkiekwyfcwyglqutmbnwdjjrgdbwp ; /usr/bin/python3
Dec 06 08:11:12 np0005548790.localdomain sudo[47012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:12 np0005548790.localdomain sudo[47012]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:13 np0005548790.localdomain sudo[47042]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikutcdnftjvhqrlnldasfqefolbtsvme ; /usr/bin/python3
Dec 06 08:11:13 np0005548790.localdomain sudo[47042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:13 np0005548790.localdomain python3[47044]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:11:13 np0005548790.localdomain sudo[47042]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:13 np0005548790.localdomain sudo[47090]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdtcegycyydpzpeftwansnaxskmycbia ; /usr/bin/python3
Dec 06 08:11:13 np0005548790.localdomain sudo[47090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:14 np0005548790.localdomain sudo[47090]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:14 np0005548790.localdomain sudo[47133]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwocwffjssobyfnahycjwgmykcvqblrd ; /usr/bin/python3
Dec 06 08:11:14 np0005548790.localdomain sudo[47133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:14 np0005548790.localdomain sudo[47133]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:15 np0005548790.localdomain sudo[47163]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmvhqieneffbygnoichywdzhlzggpyou ; /usr/bin/python3
Dec 06 08:11:15 np0005548790.localdomain sudo[47163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:15 np0005548790.localdomain python3[47165]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 06 08:11:15 np0005548790.localdomain sudo[47163]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:17 np0005548790.localdomain sudo[47179]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdmjmiochercebbqctakkvjistsxbccp ; /usr/bin/python3
Dec 06 08:11:17 np0005548790.localdomain sudo[47179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:17 np0005548790.localdomain python3[47181]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:11:17 np0005548790.localdomain sudo[47179]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:18 np0005548790.localdomain sudo[47196]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfcrgtfglsmddywgdzhqnrzytygkwvuh ; /usr/bin/python3
Dec 06 08:11:18 np0005548790.localdomain sudo[47196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:18 np0005548790.localdomain python3[47198]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:11:22 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 08:11:22 np0005548790.localdomain dbus-broker-launch[14969]: Noticed file-system modification, trigger reload.
Dec 06 08:11:22 np0005548790.localdomain dbus-broker-launch[14969]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 06 08:11:22 np0005548790.localdomain dbus-broker-launch[14969]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 06 08:11:22 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 08:11:22 np0005548790.localdomain systemd[1]: Reexecuting.
Dec 06 08:11:22 np0005548790.localdomain systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 06 08:11:22 np0005548790.localdomain systemd[1]: Detected virtualization kvm.
Dec 06 08:11:22 np0005548790.localdomain systemd[1]: Detected architecture x86-64.
Dec 06 08:11:22 np0005548790.localdomain systemd-rc-local-generator[47249]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:11:22 np0005548790.localdomain systemd-sysv-generator[47253]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:11:22 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  Converting 2706 SID table entries...
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 08:11:30 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 08:11:30 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 08:11:30 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 06 08:11:30 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 08:11:31 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:11:31 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 08:11:31 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:11:32 np0005548790.localdomain systemd-rc-local-generator[47344]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:11:32 np0005548790.localdomain systemd-sysv-generator[47348]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 08:11:32 np0005548790.localdomain systemd-journald[618]: Journal stopped
Dec 06 08:11:32 np0005548790.localdomain systemd-journald[618]: Received SIGTERM from PID 1 (systemd).
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Stopping Journal Service...
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Stopped Journal Service.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: systemd-journald.service: Consumed 1.731s CPU time.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Starting Journal Service...
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: systemd-udevd.service: Consumed 2.917s CPU time.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 06 08:11:32 np0005548790.localdomain systemd-journald[47675]: Journal started
Dec 06 08:11:32 np0005548790.localdomain systemd-journald[47675]: Runtime Journal (/run/log/journal/4b30904fc4748c16d0c72dbebcabab49) is 12.2M, max 314.7M, 302.5M free.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Started Journal Service.
Dec 06 08:11:32 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 06 08:11:32 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 08:11:32 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:11:32 np0005548790.localdomain systemd-udevd[47684]: Using default interface naming scheme 'rhel-9.0'.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 06 08:11:32 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:11:32 np0005548790.localdomain systemd-rc-local-generator[48236]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:11:32 np0005548790.localdomain systemd-sysv-generator[48239]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:11:32 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 08:11:33 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 08:11:33 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 08:11:33 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.546s CPU time.
Dec 06 08:11:33 np0005548790.localdomain systemd[1]: run-r6881084a375241dc971277e9aa05b260.service: Deactivated successfully.
Dec 06 08:11:33 np0005548790.localdomain systemd[1]: run-r9ee67df43b904ee8ba92e05b41437869.service: Deactivated successfully.
Dec 06 08:11:34 np0005548790.localdomain sudo[47196]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:34 np0005548790.localdomain sudo[48684]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtottgqfmvzgqhhvrilrdjtmflpdleok ; /usr/bin/python3
Dec 06 08:11:34 np0005548790.localdomain sudo[48684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:34 np0005548790.localdomain python3[48686]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Dec 06 08:11:34 np0005548790.localdomain sudo[48684]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:35 np0005548790.localdomain sudo[48704]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxqrzwcthwurrqqemezqjwsqdmowxwls ; /usr/bin/python3
Dec 06 08:11:35 np0005548790.localdomain sudo[48704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:35 np0005548790.localdomain python3[48706]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:11:35 np0005548790.localdomain sudo[48704]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:36 np0005548790.localdomain sudo[48722]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xiltxxtuyzehmlpotdjkbgqvcyoxndzv ; /usr/bin/python3
Dec 06 08:11:36 np0005548790.localdomain sudo[48722]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:36 np0005548790.localdomain python3[48724]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:11:36 np0005548790.localdomain python3[48724]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Dec 06 08:11:36 np0005548790.localdomain python3[48724]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Dec 06 08:11:43 np0005548790.localdomain podman[48736]: 2025-12-06 08:11:36.692229118 +0000 UTC m=+0.040300790 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 06 08:11:43 np0005548790.localdomain python3[48724]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Dec 06 08:11:43 np0005548790.localdomain sudo[48722]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:43 np0005548790.localdomain sudo[48836]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekeczcxrocrxfcrubuqwcrdweabvtlpl ; /usr/bin/python3
Dec 06 08:11:43 np0005548790.localdomain sudo[48836]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:43 np0005548790.localdomain python3[48838]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:11:43 np0005548790.localdomain python3[48838]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Dec 06 08:11:44 np0005548790.localdomain python3[48838]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Dec 06 08:11:50 np0005548790.localdomain sudo[48915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:11:50 np0005548790.localdomain sudo[48915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:11:50 np0005548790.localdomain sudo[48915]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:50 np0005548790.localdomain sudo[48930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:11:50 np0005548790.localdomain sudo[48930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:11:51 np0005548790.localdomain podman[48851]: 2025-12-06 08:11:44.091276611 +0000 UTC m=+0.028462556 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 06 08:11:51 np0005548790.localdomain python3[48838]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Dec 06 08:11:51 np0005548790.localdomain sudo[48836]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:51 np0005548790.localdomain sudo[49041]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrvlhajfocqiprcwhvhtrqjhjgxmwbbs ; /usr/bin/python3
Dec 06 08:11:51 np0005548790.localdomain sudo[49041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:11:51 np0005548790.localdomain podman[49051]: 2025-12-06 08:11:51.817850395 +0000 UTC m=+0.094190925 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, build-date=2025-11-26T19:44:28Z, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, release=1763362218, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:11:51 np0005548790.localdomain python3[49048]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:11:51 np0005548790.localdomain python3[49048]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Dec 06 08:11:51 np0005548790.localdomain podman[49051]: 2025-12-06 08:11:51.8962365 +0000 UTC m=+0.172577020 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:11:51 np0005548790.localdomain python3[49048]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Dec 06 08:11:52 np0005548790.localdomain sudo[48930]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:52 np0005548790.localdomain sudo[49139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:11:52 np0005548790.localdomain sudo[49139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:11:52 np0005548790.localdomain sudo[49139]: pam_unix(sudo:session): session closed for user root
Dec 06 08:11:52 np0005548790.localdomain sudo[49154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:11:52 np0005548790.localdomain sudo[49154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:11:52 np0005548790.localdomain sudo[49154]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:00 np0005548790.localdomain sudo[49362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:12:00 np0005548790.localdomain sudo[49362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:12:00 np0005548790.localdomain sudo[49362]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:11 np0005548790.localdomain podman[49091]: 2025-12-06 08:11:51.950353793 +0000 UTC m=+0.028632400 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:12:11 np0005548790.localdomain python3[49048]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Dec 06 08:12:11 np0005548790.localdomain sudo[49041]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:11 np0005548790.localdomain sudo[50186]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzbrmrhanqodfyvfmwvzahbfgxuzztgt ; /usr/bin/python3
Dec 06 08:12:11 np0005548790.localdomain sudo[50186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:11 np0005548790.localdomain python3[50188]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:11 np0005548790.localdomain python3[50188]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Dec 06 08:12:11 np0005548790.localdomain python3[50188]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Dec 06 08:12:33 np0005548790.localdomain podman[50202]: 2025-12-06 08:12:11.724482468 +0000 UTC m=+0.031071657 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:12:33 np0005548790.localdomain python3[50188]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 06 08:12:33 np0005548790.localdomain sudo[50186]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:33 np0005548790.localdomain sudo[50407]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvfvhjalujeffltieivtxfaauxsdgkys ; /usr/bin/python3
Dec 06 08:12:33 np0005548790.localdomain sudo[50407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:36 np0005548790.localdomain python3[50409]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:36 np0005548790.localdomain python3[50409]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 06 08:12:36 np0005548790.localdomain python3[50409]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 06 08:12:40 np0005548790.localdomain podman[50422]: 2025-12-06 08:12:36.196586331 +0000 UTC m=+0.044456043 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 06 08:12:40 np0005548790.localdomain python3[50409]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 06 08:12:40 np0005548790.localdomain sudo[50407]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:40 np0005548790.localdomain sudo[50511]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkhvpnrsdgyvqjggsamkfdaisujyuxvu ; /usr/bin/python3
Dec 06 08:12:40 np0005548790.localdomain sudo[50511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:40 np0005548790.localdomain python3[50513]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:40 np0005548790.localdomain python3[50513]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 06 08:12:40 np0005548790.localdomain python3[50513]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 06 08:12:45 np0005548790.localdomain podman[50526]: 2025-12-06 08:12:40.877415667 +0000 UTC m=+0.043799496 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 06 08:12:45 np0005548790.localdomain python3[50513]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 06 08:12:45 np0005548790.localdomain sudo[50511]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:45 np0005548790.localdomain sudo[50603]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqkzinnaeadjloxqbxnhxwljysxfxaio ; /usr/bin/python3
Dec 06 08:12:45 np0005548790.localdomain sudo[50603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:45 np0005548790.localdomain python3[50605]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:45 np0005548790.localdomain python3[50605]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 06 08:12:45 np0005548790.localdomain python3[50605]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 06 08:12:48 np0005548790.localdomain podman[50619]: 2025-12-06 08:12:45.842486011 +0000 UTC m=+0.042968722 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 06 08:12:48 np0005548790.localdomain python3[50605]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 06 08:12:48 np0005548790.localdomain sudo[50603]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:48 np0005548790.localdomain sudo[50694]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fyatxijuohkcrsgdyrrrrfntiwlpqrwo ; /usr/bin/python3
Dec 06 08:12:48 np0005548790.localdomain sudo[50694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:49 np0005548790.localdomain python3[50696]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:49 np0005548790.localdomain python3[50696]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 06 08:12:49 np0005548790.localdomain python3[50696]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 06 08:12:50 np0005548790.localdomain podman[50708]: 2025-12-06 08:12:49.25113614 +0000 UTC m=+0.038605284 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 06 08:12:50 np0005548790.localdomain python3[50696]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Dec 06 08:12:51 np0005548790.localdomain sudo[50694]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:51 np0005548790.localdomain sudo[50785]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiuhcwvqnkmwytldaqfxtfwcqchdtefo ; /usr/bin/python3
Dec 06 08:12:51 np0005548790.localdomain sudo[50785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:51 np0005548790.localdomain python3[50787]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:51 np0005548790.localdomain python3[50787]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 06 08:12:51 np0005548790.localdomain python3[50787]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 06 08:12:53 np0005548790.localdomain podman[50800]: 2025-12-06 08:12:51.484713925 +0000 UTC m=+0.040564056 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 06 08:12:53 np0005548790.localdomain python3[50787]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 06 08:12:53 np0005548790.localdomain sudo[50785]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:53 np0005548790.localdomain sudo[50877]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsqjyxehsumwdivcwvtnykhaytnqrxtm ; /usr/bin/python3
Dec 06 08:12:53 np0005548790.localdomain sudo[50877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:53 np0005548790.localdomain python3[50879]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:53 np0005548790.localdomain python3[50879]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 06 08:12:53 np0005548790.localdomain python3[50879]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 06 08:12:57 np0005548790.localdomain podman[50891]: 2025-12-06 08:12:53.999841536 +0000 UTC m=+0.044022521 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 06 08:12:57 np0005548790.localdomain python3[50879]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 06 08:12:57 np0005548790.localdomain sudo[50877]: pam_unix(sudo:session): session closed for user root
Dec 06 08:12:58 np0005548790.localdomain sudo[50977]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymcyaxsiwokonfzjpwirvgohdbxplode ; /usr/bin/python3
Dec 06 08:12:58 np0005548790.localdomain sudo[50977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:12:58 np0005548790.localdomain python3[50979]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 06 08:12:58 np0005548790.localdomain python3[50979]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 06 08:12:58 np0005548790.localdomain python3[50979]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 06 08:13:00 np0005548790.localdomain sudo[51030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:13:00 np0005548790.localdomain sudo[51030]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:13:00 np0005548790.localdomain sudo[51030]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:00 np0005548790.localdomain sudo[51045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:13:00 np0005548790.localdomain sudo[51045]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:13:00 np0005548790.localdomain podman[50992]: 2025-12-06 08:12:58.357043869 +0000 UTC m=+0.041972875 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 06 08:13:00 np0005548790.localdomain python3[50979]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 06 08:13:00 np0005548790.localdomain sudo[50977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:01 np0005548790.localdomain sudo[51045]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:01 np0005548790.localdomain sudo[51128]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmfbdgaykdpfputmtiksugrvhrjrovhv ; /usr/bin/python3
Dec 06 08:13:01 np0005548790.localdomain sudo[51128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:01 np0005548790.localdomain python3[51130]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:13:01 np0005548790.localdomain sudo[51128]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:01 np0005548790.localdomain sudo[51133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:13:01 np0005548790.localdomain sudo[51133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:13:01 np0005548790.localdomain sudo[51133]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:02 np0005548790.localdomain sudo[51193]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agazvrovxjqcdtljjlgxqwnajfydzdui ; /usr/bin/python3
Dec 06 08:13:02 np0005548790.localdomain sudo[51193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:02 np0005548790.localdomain sudo[51193]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:02 np0005548790.localdomain sudo[51211]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-invznwyuuznjyswfzpxgtmxaofkndpzf ; /usr/bin/python3
Dec 06 08:13:02 np0005548790.localdomain sudo[51211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:02 np0005548790.localdomain sudo[51211]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:03 np0005548790.localdomain sudo[51315]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-engczasjrfogpuumzrwgxjeidhiusxvk ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008782.7134645-83661-76726415112899/async_wrapper.py 929449313404 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008782.7134645-83661-76726415112899/AnsiballZ_command.py _
Dec 06 08:13:03 np0005548790.localdomain sudo[51315]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 08:13:03 np0005548790.localdomain ansible-async_wrapper.py[51317]: Invoked with 929449313404 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008782.7134645-83661-76726415112899/AnsiballZ_command.py _
Dec 06 08:13:03 np0005548790.localdomain ansible-async_wrapper.py[51320]: Starting module and watcher
Dec 06 08:13:03 np0005548790.localdomain ansible-async_wrapper.py[51320]: Start watching 51321 (3600)
Dec 06 08:13:03 np0005548790.localdomain ansible-async_wrapper.py[51321]: Start module (51321)
Dec 06 08:13:03 np0005548790.localdomain ansible-async_wrapper.py[51317]: Return async_wrapper task started.
Dec 06 08:13:03 np0005548790.localdomain sudo[51315]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:03 np0005548790.localdomain sudo[51336]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bybidibtjcqohstwqyhxfzrrxwwfovox ; /usr/bin/python3
Dec 06 08:13:03 np0005548790.localdomain sudo[51336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:03 np0005548790.localdomain python3[51341]: ansible-ansible.legacy.async_status Invoked with jid=929449313404.51317 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:13:03 np0005548790.localdomain sudo[51336]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:06 np0005548790.localdomain puppet-user[51340]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:06 np0005548790.localdomain puppet-user[51340]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:06 np0005548790.localdomain puppet-user[51340]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:06 np0005548790.localdomain puppet-user[51340]:    (file & line not available)
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:    (file & line not available)
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.14 seconds
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Notice: Applied catalog in 0.09 seconds
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Application:
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:    Initial environment: production
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:    Converged environment: production
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:          Run mode: user
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Changes:
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:             Total: 3
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Events:
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:           Success: 3
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:             Total: 3
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Resources:
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:           Changed: 3
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:       Out of sync: 3
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:             Total: 10
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Time:
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:          Schedule: 0.00
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:              File: 0.00
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:              Exec: 0.02
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:            Augeas: 0.06
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:    Transaction evaluation: 0.09
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:    Catalog application: 0.09
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:    Config retrieval: 0.17
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:          Last run: 1765008787
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:        Filebucket: 0.00
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:             Total: 0.09
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]: Version:
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:            Config: 1765008786
Dec 06 08:13:07 np0005548790.localdomain puppet-user[51340]:            Puppet: 7.10.0
Dec 06 08:13:07 np0005548790.localdomain ansible-async_wrapper.py[51321]: Module complete (51321)
Dec 06 08:13:08 np0005548790.localdomain ansible-async_wrapper.py[51320]: Done in kid B.
Dec 06 08:13:13 np0005548790.localdomain sudo[51558]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phtylgqpzteuzpjxxkekqyonbxzoyxhg ; /usr/bin/python3
Dec 06 08:13:13 np0005548790.localdomain sudo[51558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:13 np0005548790.localdomain python3[51560]: ansible-ansible.legacy.async_status Invoked with jid=929449313404.51317 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:13:13 np0005548790.localdomain sudo[51558]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 np0005548790.localdomain sudo[51574]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xcsijotpbopaqdagzjanzsyjeddgkgzv ; /usr/bin/python3
Dec 06 08:13:14 np0005548790.localdomain sudo[51574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:14 np0005548790.localdomain python3[51576]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:13:14 np0005548790.localdomain sudo[51574]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:14 np0005548790.localdomain sudo[51590]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skjbvaxquzducjjaernoovmifgnbcqqp ; /usr/bin/python3
Dec 06 08:13:14 np0005548790.localdomain sudo[51590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:14 np0005548790.localdomain python3[51592]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:13:14 np0005548790.localdomain sudo[51590]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:15 np0005548790.localdomain sudo[51638]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-toavdsqqttddwbyfgmfkpnpjzoasqszj ; /usr/bin/python3
Dec 06 08:13:15 np0005548790.localdomain sudo[51638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:16 np0005548790.localdomain python3[51640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:16 np0005548790.localdomain sudo[51638]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:16 np0005548790.localdomain sudo[51681]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asasjljwezcydtovxbdifucxgbpkwphe ; /usr/bin/python3
Dec 06 08:13:16 np0005548790.localdomain sudo[51681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:16 np0005548790.localdomain python3[51683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008795.1033204-84019-158385674091634/source _original_basename=tmplff1nb81 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:13:16 np0005548790.localdomain sudo[51681]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:17 np0005548790.localdomain sudo[51711]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itpjtxduurprbziwhuuomwksdbekpluf ; /usr/bin/python3
Dec 06 08:13:17 np0005548790.localdomain sudo[51711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:17 np0005548790.localdomain python3[51713]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:17 np0005548790.localdomain sudo[51711]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:17 np0005548790.localdomain sudo[51727]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltefpuisclrytoqxnrtlyofbdllirfal ; /usr/bin/python3
Dec 06 08:13:17 np0005548790.localdomain sudo[51727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:17 np0005548790.localdomain sudo[51727]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:18 np0005548790.localdomain sudo[51814]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zoghikgerxmmulhporimsszkmkqjtbcr ; /usr/bin/python3
Dec 06 08:13:18 np0005548790.localdomain sudo[51814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:18 np0005548790.localdomain python3[51816]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 06 08:13:18 np0005548790.localdomain sudo[51814]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:18 np0005548790.localdomain sudo[51833]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmpbwvzvkwnjwpjlgcvzgonkuyqxaovh ; /usr/bin/python3
Dec 06 08:13:18 np0005548790.localdomain sudo[51833]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:18 np0005548790.localdomain python3[51835]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 08:13:18 np0005548790.localdomain sudo[51833]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:18 np0005548790.localdomain sudo[51849]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjviuzuaneeqbbigcbpumhtqbopmygnw ; /usr/bin/python3
Dec 06 08:13:18 np0005548790.localdomain sudo[51849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:19 np0005548790.localdomain python3[51851]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005548790 step=1 update_config_hash_only=False
Dec 06 08:13:19 np0005548790.localdomain sudo[51849]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:19 np0005548790.localdomain sudo[51865]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryzxhpjdmdhodogpkemmybadremrrdmo ; /usr/bin/python3
Dec 06 08:13:19 np0005548790.localdomain sudo[51865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:19 np0005548790.localdomain python3[51867]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:19 np0005548790.localdomain sudo[51865]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:19 np0005548790.localdomain sudo[51881]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xyqceuwbtlmktzdlkxochlzcvtapoyuv ; /usr/bin/python3
Dec 06 08:13:19 np0005548790.localdomain sudo[51881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:19 np0005548790.localdomain python3[51883]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 06 08:13:20 np0005548790.localdomain sudo[51881]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:20 np0005548790.localdomain sudo[51897]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tihttfrltckrhmjeywjokaehkkpzmxwo ; /usr/bin/python3
Dec 06 08:13:20 np0005548790.localdomain sudo[51897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:20 np0005548790.localdomain python3[51899]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 08:13:20 np0005548790.localdomain sudo[51897]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:21 np0005548790.localdomain sudo[51939]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akynwczzqugtbnqtxiybyrvsjdkeccvq ; /usr/bin/python3
Dec 06 08:13:21 np0005548790.localdomain sudo[51939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:21 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Dec 06 08:13:21 np0005548790.localdomain podman[52116]: 2025-12-06 08:13:21.692930041 +0000 UTC m=+0.029975559 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 06 08:13:21 np0005548790.localdomain podman[52116]: 2025-12-06 08:13:21.839842865 +0000 UTC m=+0.176888423 container create e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=container-puppet-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:13:21 np0005548790.localdomain podman[52154]: 2025-12-06 08:13:21.882147628 +0000 UTC m=+0.200938819 container create eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, container_name=container-puppet-collectd, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:13:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3.scope.
Dec 06 08:13:21 np0005548790.localdomain podman[52133]: 2025-12-06 08:13:21.704408703 +0000 UTC m=+0.033778111 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 06 08:13:21 np0005548790.localdomain podman[52119]: 2025-12-06 08:13:21.705344389 +0000 UTC m=+0.051697650 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:13:21 np0005548790.localdomain podman[52133]: 2025-12-06 08:13:21.92035926 +0000 UTC m=+0.249728658 container create 773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=container-puppet-metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044)
Dec 06 08:13:21 np0005548790.localdomain podman[52154]: 2025-12-06 08:13:21.820313633 +0000 UTC m=+0.139104824 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 06 08:13:21 np0005548790.localdomain podman[52136]: 2025-12-06 08:13:21.82459994 +0000 UTC m=+0.150539305 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 06 08:13:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf.scope.
Dec 06 08:13:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040.scope.
Dec 06 08:13:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cef3ed0275ec9a23efe0f943d6fe49d48d180e84df9235ed237a3514d395164/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/853ff4856ee5a83cc87cbeb349af5655b5d75f579ca17ec21c7fae4967e407aa/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:21 np0005548790.localdomain podman[52154]: 2025-12-06 08:13:21.971250737 +0000 UTC m=+0.290041898 container init eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:13:21 np0005548790.localdomain podman[52154]: 2025-12-06 08:13:21.981531128 +0000 UTC m=+0.300322299 container start eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:13:21 np0005548790.localdomain podman[52154]: 2025-12-06 08:13:21.981859426 +0000 UTC m=+0.300650607 container attach eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=container-puppet-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044)
Dec 06 08:13:21 np0005548790.localdomain podman[52116]: 2025-12-06 08:13:21.991849808 +0000 UTC m=+0.328895346 container init e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, version=17.1.12, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team)
Dec 06 08:13:22 np0005548790.localdomain podman[52116]: 2025-12-06 08:13:21.999923768 +0000 UTC m=+0.336969286 container start e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=container-puppet-iscsid, release=1761123044, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible)
Dec 06 08:13:22 np0005548790.localdomain podman[52116]: 2025-12-06 08:13:22.000211776 +0000 UTC m=+0.337257354 container attach e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1)
Dec 06 08:13:23 np0005548790.localdomain podman[52136]: 2025-12-06 08:13:23.500072712 +0000 UTC m=+1.826012087 container create 41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-crond, batch=17.1_20251118.1, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, tcib_managed=true, architecture=x86_64, release=1761123044, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:13:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd.scope.
Dec 06 08:13:23 np0005548790.localdomain podman[52133]: 2025-12-06 08:13:23.555696289 +0000 UTC m=+1.885065707 container init 773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=container-puppet-metrics_qdr, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 08:13:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8550f340eb477249447dfc212906241c2d1057a5f1e3afa20f33c907252b7739/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:23 np0005548790.localdomain podman[52133]: 2025-12-06 08:13:23.570141382 +0000 UTC m=+1.899510810 container start 773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:13:23 np0005548790.localdomain podman[52133]: 2025-12-06 08:13:23.570378379 +0000 UTC m=+1.899747797 container attach 773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=container-puppet-metrics_qdr, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 
qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 06 08:13:23 np0005548790.localdomain podman[52136]: 2025-12-06 08:13:23.57962133 +0000 UTC m=+1.905560675 container init 41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-crond, release=1761123044, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:13:23 np0005548790.localdomain podman[52136]: 2025-12-06 08:13:23.58692398 +0000 UTC m=+1.912863315 container start 41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, distribution-scope=public, container_name=container-puppet-crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-cron-container)
Dec 06 08:13:23 np0005548790.localdomain podman[52136]: 2025-12-06 08:13:23.58729408 +0000 UTC m=+1.913233415 container attach 41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, version=17.1.12, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=container-puppet-crond, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 06 08:13:23 np0005548790.localdomain podman[52119]: 2025-12-06 08:13:23.611579151 +0000 UTC m=+1.957932432 container create 5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Dec 06 08:13:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58.scope.
Dec 06 08:13:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09684bd34ad3c8fd079e7ba80e7282e1f6c9c49f3e804a6f19145e8413aff6f/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:23 np0005548790.localdomain podman[52119]: 2025-12-06 08:13:23.676468161 +0000 UTC m=+2.022821432 container init 5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_puppet_step1, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=container-puppet-nova_libvirt, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude 
tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:13:23 np0005548790.localdomain podman[52119]: 2025-12-06 08:13:23.686857483 +0000 UTC m=+2.033210764 container start 5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, config_id=tripleo_puppet_step1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Dec 06 08:13:23 np0005548790.localdomain podman[52119]: 2025-12-06 08:13:23.68782283 +0000 UTC m=+2.034176161 container attach 5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, version=17.1.12, config_id=tripleo_puppet_step1, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:13:24 np0005548790.localdomain podman[52048]: 2025-12-06 08:13:21.615341315 +0000 UTC m=+0.041911623 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 06 08:13:24 np0005548790.localdomain podman[52338]: 2025-12-06 08:13:24.61632938 +0000 UTC m=+0.078416958 container create d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:59Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-central-container, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-ceilometer-central, container_name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:13:24 np0005548790.localdomain systemd[1]: Started libpod-conmon-d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103.scope.
Dec 06 08:13:24 np0005548790.localdomain systemd[1]: tmp-crun.JgQ0mr.mount: Deactivated successfully.
Dec 06 08:13:24 np0005548790.localdomain podman[52338]: 2025-12-06 08:13:24.568593669 +0000 UTC m=+0.030681277 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 06 08:13:24 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b5c90c9b92f21ac24e890e56ff13c8f62f2e0d1099ae340a806b11e7e1b242/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:24 np0005548790.localdomain podman[52338]: 2025-12-06 08:13:24.687696636 +0000 UTC m=+0.149784214 container init d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
url=https://www.redhat.com, container_name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:59Z, io.openshift.expose-services=, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container)
Dec 06 08:13:24 np0005548790.localdomain podman[52338]: 2025-12-06 08:13:24.695432206 +0000 UTC m=+0.157519784 container start d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:59Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, architecture=x86_64, version=17.1.12, container_name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ceilometer-central-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 06 08:13:24 np0005548790.localdomain podman[52338]: 2025-12-06 08:13:24.695663712 +0000 UTC m=+0.157751280 container attach d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_puppet_step1, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:59Z, name=rhosp17/openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-central-container, release=1761123044, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude 
tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, container_name=container-puppet-ceilometer, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain ovs-vsctl[52516]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.10 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    (file & line not available)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.08 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: Accepting previously invalid value for target type 'Integer'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.12 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Dec 06 08:13:25 np0005548790.localdomain crontab[52690]: (root) LIST (root)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Dec 06 08:13:25 np0005548790.localdomain crontab[52691]: (root) REPLACE (root)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.36 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Notice: Applied catalog in 0.04 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Application:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    Initial environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    Converged environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:          Run mode: user
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Changes:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:             Total: 2
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Events:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:           Success: 2
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:             Total: 2
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Resources:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:           Changed: 2
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:       Out of sync: 2
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:           Skipped: 7
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:             Total: 9
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Time:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:              File: 0.01
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:              Cron: 0.01
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    Transaction evaluation: 0.04
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    Catalog application: 0.04
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:    Config retrieval: 0.11
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:          Last run: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:             Total: 0.04
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]: Version:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:            Config: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52278]:            Puppet: 7.10.0
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}9508998a05743a2cd340d01710403b2cbd2dff6376d51ca34c062370a10995a0'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Notice: Applied catalog in 0.03 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Application:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    Initial environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    Converged environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:          Run mode: user
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Changes:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:             Total: 7
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Events:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:           Success: 7
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:             Total: 7
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Resources:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:           Skipped: 13
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:           Changed: 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:       Out of sync: 5
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:             Total: 20
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Time:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:              File: 0.01
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    Transaction evaluation: 0.02
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    Catalog application: 0.03
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:    Config retrieval: 0.15
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:          Last run: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:             Total: 0.03
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]: Version:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:            Config: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52276]:            Puppet: 7.10.0
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: in a future release. Use nova::cinder::os_region_name instead
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: in a future release. Use nova::cinder::catalog_info instead
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52298]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Notice: Applied catalog in 0.24 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Application:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    Initial environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    Converged environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:          Run mode: user
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Changes:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:             Total: 43
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Events:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:           Success: 43
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:             Total: 43
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Resources:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:           Skipped: 14
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:           Changed: 38
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:       Out of sync: 38
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:             Total: 82
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Time:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:       Concat file: 0.00
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:              File: 0.10
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    Transaction evaluation: 0.24
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    Catalog application: 0.24
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    Config retrieval: 0.43
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:          Last run: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:    Concat fragment: 0.00
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:             Total: 0.24
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]: Version:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:            Config: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52243]:            Puppet: 7.10.0
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Notice: Applied catalog in 0.46 seconds
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Application:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    Initial environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    Converged environment: production
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:          Run mode: user
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Changes:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:             Total: 4
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Events:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:           Success: 4
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:             Total: 4
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Resources:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:           Changed: 4
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:       Out of sync: 4
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:           Skipped: 8
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:             Total: 13
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Time:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:              File: 0.00
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:              Exec: 0.05
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    Config retrieval: 0.14
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:            Augeas: 0.40
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    Transaction evaluation: 0.46
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:    Catalog application: 0.46
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:          Last run: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:             Total: 0.46
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]: Version:
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:            Config: 1765008805
Dec 06 08:13:25 np0005548790.localdomain puppet-user[52244]:            Puppet: 7.10.0
Dec 06 08:13:25 np0005548790.localdomain systemd[1]: libpod-41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd.scope: Deactivated successfully.
Dec 06 08:13:25 np0005548790.localdomain systemd[1]: libpod-41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd.scope: Consumed 2.143s CPU time.
Dec 06 08:13:25 np0005548790.localdomain podman[52136]: 2025-12-06 08:13:25.880932433 +0000 UTC m=+4.206871768 container died 41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-cron, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z)
Dec 06 08:13:25 np0005548790.localdomain systemd[1]: libpod-773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040.scope: Deactivated successfully.
Dec 06 08:13:25 np0005548790.localdomain systemd[1]: libpod-773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040.scope: Consumed 2.176s CPU time.
Dec 06 08:13:25 np0005548790.localdomain podman[52133]: 2025-12-06 08:13:25.903108647 +0000 UTC m=+4.232478095 container died 773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-metrics_qdr, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:13:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8550f340eb477249447dfc212906241c2d1057a5f1e3afa20f33c907252b7739-merged.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Dec 06 08:13:26 np0005548790.localdomain podman[52760]: 2025-12-06 08:13:26.060270292 +0000 UTC m=+0.171744754 container cleanup 41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-crond, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-conmon-41b6dc47e65f36adb0a60e82f18204d3877fb9a54a464dca7e04c2ab17b033fd.scope: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 06 08:13:26 np0005548790.localdomain podman[52772]: 2025-12-06 08:13:26.112272969 +0000 UTC m=+0.195900292 container cleanup 773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, 
name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=container-puppet-metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-conmon-773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040.scope: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::qdr
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3.scope: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3.scope: Consumed 2.556s CPU time.
Dec 06 08:13:26 np0005548790.localdomain podman[52116]: 2025-12-06 08:13:26.153869803 +0000 UTC m=+4.490915321 container died e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf.scope: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf.scope: Consumed 2.548s CPU time.
Dec 06 08:13:26 np0005548790.localdomain podman[52154]: 2025-12-06 08:13:26.202859648 +0000 UTC m=+4.521650799 container died eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_id=tripleo_puppet_step1, io.openshift.expose-services=, container_name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:13:26 np0005548790.localdomain podman[52901]: 2025-12-06 08:13:26.285495351 +0000 UTC m=+0.075079688 container cleanup eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-conmon-eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf.scope: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain podman[52872]: 2025-12-06 08:13:26.317157003 +0000 UTC m=+0.148526099 container cleanup e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:44:13Z, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: libpod-conmon-e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3.scope: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::iscsid
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 06 08:13:26 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 06 08:13:26 np0005548790.localdomain podman[53015]: 2025-12-06 08:13:26.527115177 +0000 UTC m=+0.080817274 container create 6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-rsyslog, build-date=2025-11-18T22:49:49Z, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: Started libpod-conmon-6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b.scope.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:26 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68867986d4a2077de0c22951b19fb4fc1ac570d0cdbe733dc50c260ed931a7ca/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 np0005548790.localdomain podman[53015]: 2025-12-06 08:13:26.575121335 +0000 UTC m=+0.128823442 container init 6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4)
Dec 06 08:13:26 np0005548790.localdomain podman[53015]: 2025-12-06 08:13:26.476476456 +0000 UTC m=+0.030178583 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 06 08:13:26 np0005548790.localdomain podman[53015]: 2025-12-06 08:13:26.581692515 +0000 UTC m=+0.135394602 container start 6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T22:49:49Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, container_name=container-puppet-rsyslog, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 06 08:13:26 np0005548790.localdomain podman[53015]: 2025-12-06 08:13:26.581941802 +0000 UTC m=+0.135643929 container attach 6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc.)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 1.23 seconds
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]:    (file & line not available)
Dec 06 08:13:26 np0005548790.localdomain podman[53101]: 2025-12-06 08:13:26.657594274 +0000 UTC m=+0.070994567 container create 42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-ovn_controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc.)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]:    (file & line not available)
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: Started libpod-conmon-42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8.scope.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:26 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ebbf80d85ec0610e007daf7d98c19344ed72ab9cca52ad6ba1866e75af9720/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ebbf80d85ec0610e007daf7d98c19344ed72ab9cca52ad6ba1866e75af9720/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:26 np0005548790.localdomain podman[53101]: 2025-12-06 08:13:26.717335682 +0000 UTC m=+0.130735965 container init 42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 06 08:13:26 np0005548790.localdomain podman[53101]: 2025-12-06 08:13:26.623122934 +0000 UTC m=+0.036523197 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 06 08:13:26 np0005548790.localdomain podman[53101]: 2025-12-06 08:13:26.72532683 +0000 UTC m=+0.138727083 container start 42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, config_id=tripleo_puppet_step1, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, 
name=rhosp17/openstack-ovn-controller, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:13:26 np0005548790.localdomain podman[53101]: 2025-12-06 08:13:26.725482234 +0000 UTC m=+0.138882517 container attach 42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T23:34:05Z, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=container-puppet-ovn_controller, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}b5992c61c5e6c0fa60ac7720677a0efdfb73ceba695978e2f56794a0d035436f'
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ff4856ee5a83cc87cbeb349af5655b5d75f579ca17ec21c7fae4967e407aa-merged.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6cef3ed0275ec9a23efe0f943d6fe49d48d180e84df9235ed237a3514d395164-merged.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282-merged.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8dc841051e50a0c2ae9627be2c0b696203e864cab463ec312a3cce428337eb3-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Warning: Empty environment setting 'TLS_PASSWORD'
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]:    (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}8f9f91b7bc846aa12da1e2df7356fc45f862596082e133d7976104ee8d1893c1'
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Dec 06 08:13:26 np0005548790.localdomain puppet-user[52385]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.37 seconds
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Notice: Applied catalog in 0.42 seconds
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Application:
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:    Initial environment: production
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:    Converged environment: production
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:          Run mode: user
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Changes:
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:             Total: 31
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Events:
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:           Success: 31
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:             Total: 31
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Resources:
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:           Skipped: 22
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:           Changed: 31
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:       Out of sync: 31
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:             Total: 151
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Time:
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:           Package: 0.01
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:    Ceilometer config: 0.35
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:    Transaction evaluation: 0.41
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:    Catalog application: 0.42
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:    Config retrieval: 0.43
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:          Last run: 1765008807
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:         Resources: 0.00
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:             Total: 0.42
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]: Version:
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:            Config: 1765008806
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52385]:            Puppet: 7.10.0
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain systemd[1]: libpod-d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103.scope: Deactivated successfully.
Dec 06 08:13:27 np0005548790.localdomain systemd[1]: libpod-d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103.scope: Consumed 2.819s CPU time.
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Dec 06 08:13:27 np0005548790.localdomain podman[53238]: 2025-12-06 08:13:27.988307638 +0000 UTC m=+0.093607653 container died d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, config_id=tripleo_puppet_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, url=https://www.redhat.com, container_name=container-puppet-ceilometer, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, build-date=2025-11-19T00:11:59Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:13:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-32b5c90c9b92f21ac24e890e56ff13c8f62f2e0d1099ae340a806b11e7e1b242-merged.mount: Deactivated successfully.
Dec 06 08:13:28 np0005548790.localdomain podman[53238]: 2025-12-06 08:13:28.030810647 +0000 UTC m=+0.136110612 container cleanup d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ceilometer-central-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, container_name=container-puppet-ceilometer, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, build-date=2025-11-19T00:11:59Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Dec 06 08:13:28 np0005548790.localdomain systemd[1]: libpod-conmon-d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103.scope: Deactivated successfully.
Dec 06 08:13:28 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    (file & line not available)
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    (file & line not available)
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.24 seconds
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]:    (file & line not available)
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]:    (file & line not available)
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}4995dec1de43605ce2fdacf1dc62707c72c1392d5ccdd9f8517c61e2c25d87dd'
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}9f797f9d49cf12085061840a6e15e35ef08aaf3c80bbe03bcf23d28dd55767ae'
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Notice: Applied catalog in 0.14 seconds
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Application:
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    Initial environment: production
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    Converged environment: production
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:          Run mode: user
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Changes:
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:             Total: 3
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Events:
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:           Success: 3
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:             Total: 3
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Resources:
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:           Skipped: 11
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:           Changed: 3
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:       Out of sync: 3
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:             Total: 25
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Time:
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:       Concat file: 0.00
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    Concat fragment: 0.00
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:              File: 0.04
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    Transaction evaluation: 0.13
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    Catalog application: 0.14
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:    Config retrieval: 0.29
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:          Last run: 1765008808
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:             Total: 0.14
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]: Version:
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:            Config: 1765008808
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53160]:            Puppet: 7.10.0
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.26 seconds
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain ovs-vsctl[53432]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain ovs-vsctl[53438]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain ovs-vsctl[53441]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain ovs-vsctl[53451]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005548790.localdomain
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005548790.novalocal' to 'np0005548790.localdomain'
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain ovs-vsctl[53460]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Dec 06 08:13:28 np0005548790.localdomain ovs-vsctl[53462]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Dec 06 08:13:28 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53469]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53476]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: libpod-6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b.scope: Deactivated successfully.
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: libpod-6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b.scope: Consumed 2.249s CPU time.
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain podman[53015]: 2025-12-06 08:13:29.041416996 +0000 UTC m=+2.595119083 container died 6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, container_name=container-puppet-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true)
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53484]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53492]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53494]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:95:0a:04
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53496]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53498]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain ovs-vsctl[53500]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Notice: Applied catalog in 0.41 seconds
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Application:
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:    Initial environment: production
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:    Converged environment: production
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:          Run mode: user
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Changes:
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:             Total: 14
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Events:
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:           Success: 14
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:             Total: 14
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Resources:
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:           Skipped: 12
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:           Changed: 14
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:       Out of sync: 14
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:             Total: 29
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Time:
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:              Exec: 0.01
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:    Config retrieval: 0.30
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:         Vs config: 0.35
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:    Transaction evaluation: 0.40
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:    Catalog application: 0.41
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:          Last run: 1765008809
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:             Total: 0.41
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]: Version:
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:            Config: 1765008808
Dec 06 08:13:29 np0005548790.localdomain puppet-user[53189]:            Puppet: 7.10.0
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: tmp-crun.7BloJ3.mount: Deactivated successfully.
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-68867986d4a2077de0c22951b19fb4fc1ac570d0cdbe733dc50c260ed931a7ca-merged.mount: Deactivated successfully.
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: libpod-42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8.scope: Deactivated successfully.
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: libpod-42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8.scope: Consumed 2.783s CPU time.
Dec 06 08:13:29 np0005548790.localdomain podman[53101]: 2025-12-06 08:13:29.835569943 +0000 UTC m=+3.248970276 container died 42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, release=1761123044, vcs-type=git, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:34:05Z, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, container_name=container-puppet-ovn_controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1)
Dec 06 08:13:29 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Dec 06 08:13:29 np0005548790.localdomain podman[53479]: 2025-12-06 08:13:29.984886154 +0000 UTC m=+0.938244127 container cleanup 6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, com.redhat.component=openstack-rsyslog-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, 
Inc., build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public)
Dec 06 08:13:29 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 06 08:13:29 np0005548790.localdomain systemd[1]: libpod-conmon-6e80b042a7bc10a229abe8f900647c0b2562df0883ef609b8df977034dbee64b.scope: Deactivated successfully.
Dec 06 08:13:30 np0005548790.localdomain podman[53058]: 2025-12-06 08:13:26.563155789 +0000 UTC m=+0.034644695 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 06 08:13:30 np0005548790.localdomain podman[53629]: 2025-12-06 08:13:30.031795552 +0000 UTC m=+0.186987537 container cleanup 42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, container_name=container-puppet-ovn_controller, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:13:30 np0005548790.localdomain systemd[1]: libpod-conmon-42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8.scope: Deactivated successfully.
Dec 06 08:13:30 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::agents::ovn
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 06 08:13:30 np0005548790.localdomain podman[53710]: 2025-12-06 08:13:30.250609687 +0000 UTC m=+0.084437793 container create 491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, architecture=x86_64, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-server, io.openshift.expose-services=, com.redhat.component=openstack-neutron-server-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-neutron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:23:27Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Dec 06 08:13:30 np0005548790.localdomain systemd[1]: Started libpod-conmon-491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157.scope.
Dec 06 08:13:30 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:30 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/208fd3e28e1d3a8776a6524bc8771d61ec27c4bea36118c21f18f1762d939041/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:30 np0005548790.localdomain podman[53710]: 2025-12-06 08:13:30.205904708 +0000 UTC m=+0.039732864 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 06 08:13:30 np0005548790.localdomain podman[53710]: 2025-12-06 08:13:30.308054243 +0000 UTC m=+0.141882379 container init 491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_puppet_step1, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, io.openshift.expose-services=, name=rhosp17/openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, container_name=container-puppet-neutron, build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Dec 06 08:13:30 np0005548790.localdomain podman[53710]: 2025-12-06 08:13:30.316664708 +0000 UTC m=+0.150492834 container start 491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, container_name=container-puppet-neutron, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:23:27Z, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 06 08:13:30 np0005548790.localdomain podman[53710]: 2025-12-06 08:13:30.316980346 +0000 UTC m=+0.150808542 container attach 491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., container_name=container-puppet-neutron, build-date=2025-11-19T00:23:27Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-server, release=1761123044, architecture=x86_64, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain systemd[1]: tmp-crun.8d8Kzl.mount: Deactivated successfully.
Dec 06 08:13:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b0ebbf80d85ec0610e007daf7d98c19344ed72ab9cca52ad6ba1866e75af9720-merged.mount: Deactivated successfully.
Dec 06 08:13:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42cbb5b17a474b1efb0ddcb731d593f9c995ee48c1582e119f713d0ed18b27c8-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 06 08:13:30 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4'
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Notice: Applied catalog in 4.71 seconds
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Application:
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Initial environment: production
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Converged environment: production
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:          Run mode: user
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Changes:
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:             Total: 183
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Events:
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:           Success: 183
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:             Total: 183
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Resources:
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:           Changed: 183
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:       Out of sync: 183
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:           Skipped: 57
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:             Total: 487
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Time:
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:       Concat file: 0.00
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Concat fragment: 0.00
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:            Anchor: 0.00
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:         File line: 0.00
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Virtlogd config: 0.00
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Virtstoraged config: 0.01
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Virtqemud config: 0.01
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:              Exec: 0.02
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Virtsecretd config: 0.02
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:           Package: 0.03
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Virtnodedevd config: 0.03
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Virtproxyd config: 0.03
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:              File: 0.03
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:            Augeas: 1.22
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Config retrieval: 1.48
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:          Last run: 1765008811
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:       Nova config: 3.10
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Transaction evaluation: 4.69
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:    Catalog application: 4.71
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:         Resources: 0.00
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:             Total: 4.71
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]: Version:
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:            Config: 1765008805
Dec 06 08:13:31 np0005548790.localdomain puppet-user[52298]:            Puppet: 7.10.0
Dec 06 08:13:31 np0005548790.localdomain sshd[53750]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]:    (file & line not available)
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]:    (file & line not available)
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Dec 06 08:13:32 np0005548790.localdomain systemd[1]: libpod-5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58.scope: Deactivated successfully.
Dec 06 08:13:32 np0005548790.localdomain systemd[1]: libpod-5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58.scope: Consumed 8.196s CPU time.
Dec 06 08:13:32 np0005548790.localdomain podman[53858]: 2025-12-06 08:13:32.531804392 +0000 UTC m=+0.050057346 container died 5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=container-puppet-nova_libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 06 08:13:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b09684bd34ad3c8fd079e7ba80e7282e1f6c9c49f3e804a6f19145e8413aff6f-merged.mount: Deactivated successfully.
Dec 06 08:13:32 np0005548790.localdomain podman[53858]: 2025-12-06 08:13:32.661829376 +0000 UTC m=+0.180082300 container cleanup 5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:13:32 np0005548790.localdomain systemd[1]: libpod-conmon-5dcd4454f7c400ecbfe68f933050a0972d1730268cac5585eea99eae264b5e58.scope: Deactivated successfully.
Dec 06 08:13:32 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages
                                                         # TODO(emilien): figure how to deal with libvirt profile.
                                                         # We'll probably treat it like we do with Neutron plugins.
                                                         # Until then, just include it in the default nova-compute role.
                                                         include tripleo::profile::base::nova::compute::libvirt
                                                         
                                                         include tripleo::profile::base::nova::libvirt
                                                         
                                                         include tripleo::profile::base::nova::compute::libvirt_guests
                                                         
                                                         include tripleo::profile::base::sshd
                                                         include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:13:32 np0005548790.localdomain puppet-user[53744]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.63 seconds
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Notice: Applied catalog in 0.45 seconds
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Application:
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Initial environment: production
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Converged environment: production
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:          Run mode: user
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Changes:
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:             Total: 33
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Events:
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:           Success: 33
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:             Total: 33
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Resources:
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:           Skipped: 21
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:           Changed: 33
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:       Out of sync: 33
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:             Total: 155
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Time:
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:         Resources: 0.00
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Ovn metadata agent config: 0.02
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Neutron config: 0.36
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Transaction evaluation: 0.44
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Catalog application: 0.45
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:    Config retrieval: 0.70
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:          Last run: 1765008813
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:             Total: 0.45
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]: Version:
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:            Config: 1765008812
Dec 06 08:13:33 np0005548790.localdomain puppet-user[53744]:            Puppet: 7.10.0
Dec 06 08:13:34 np0005548790.localdomain systemd[1]: libpod-491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157.scope: Deactivated successfully.
Dec 06 08:13:34 np0005548790.localdomain systemd[1]: libpod-491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157.scope: Consumed 3.525s CPU time.
Dec 06 08:13:34 np0005548790.localdomain podman[53710]: 2025-12-06 08:13:34.02223656 +0000 UTC m=+3.856064716 container died 491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:23:27Z, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-server, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server)
Dec 06 08:13:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-208fd3e28e1d3a8776a6524bc8771d61ec27c4bea36118c21f18f1762d939041-merged.mount: Deactivated successfully.
Dec 06 08:13:34 np0005548790.localdomain podman[53928]: 2025-12-06 08:13:34.147109283 +0000 UTC m=+0.113605417 container cleanup 491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, config_id=tripleo_puppet_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, release=1761123044, container_name=container-puppet-neutron, tcib_managed=true, com.redhat.component=openstack-neutron-server-container, name=rhosp17/openstack-neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:13:34 np0005548790.localdomain systemd[1]: libpod-conmon-491a4f070065d442be36abbbebd6631cf4085791323afb07f183c459a03b0157.scope: Deactivated successfully.
Dec 06 08:13:34 np0005548790.localdomain python3[51941]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005548790 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::ovn_metadata
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005548790', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro 
--volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 06 08:13:34 np0005548790.localdomain sudo[51939]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:35 np0005548790.localdomain sudo[53978]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cimdaomrubmnkpawktxlyotntogbqsrc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:35 np0005548790.localdomain sudo[53978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:35 np0005548790.localdomain python3[53980]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:35 np0005548790.localdomain sudo[53978]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:35 np0005548790.localdomain sudo[53994]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwkxquyujkhwbviwivkturogtfnllzvz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:35 np0005548790.localdomain sudo[53994]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:35 np0005548790.localdomain sudo[53994]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:36 np0005548790.localdomain sudo[54010]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cuvuighyoiootrkdorydezupssvsyioz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:36 np0005548790.localdomain sudo[54010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:36 np0005548790.localdomain python3[54012]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:13:36 np0005548790.localdomain sudo[54010]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:36 np0005548790.localdomain sudo[54060]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahrvzpcxcfmihmxqmmejthrfurrlzopy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:36 np0005548790.localdomain sudo[54060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:36 np0005548790.localdomain python3[54062]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:36 np0005548790.localdomain sudo[54060]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:36 np0005548790.localdomain sudo[54103]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wihugsdxzdntpvmdoevwmoyutgzotjzr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:36 np0005548790.localdomain sudo[54103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:37 np0005548790.localdomain python3[54105]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008816.4722052-84654-188363975720309/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:37 np0005548790.localdomain sudo[54103]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:37 np0005548790.localdomain sudo[54165]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zaflrlmywfizdrsoovziplkvhdzhyswa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:37 np0005548790.localdomain sudo[54165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:37 np0005548790.localdomain python3[54167]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:37 np0005548790.localdomain sudo[54165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:37 np0005548790.localdomain sudo[54208]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlwpjefvuwtrtpuqwzykltirgrgnlsah ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:37 np0005548790.localdomain sudo[54208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:38 np0005548790.localdomain python3[54210]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008817.2956529-84654-42960171357101/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:38 np0005548790.localdomain sudo[54208]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:38 np0005548790.localdomain sudo[54270]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffmsatwupmfrxnedxzbclpletveetecd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:38 np0005548790.localdomain sudo[54270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:38 np0005548790.localdomain python3[54272]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:38 np0005548790.localdomain sudo[54270]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:38 np0005548790.localdomain sudo[54313]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzkmtrprxfqgvdtmhyyduljxndnoegsu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:38 np0005548790.localdomain sudo[54313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:38 np0005548790.localdomain python3[54315]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008818.25932-84731-215882110758165/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:38 np0005548790.localdomain sudo[54313]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:39 np0005548790.localdomain sudo[54375]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iftctovdupdjtsumaxisjaltjnyomluf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:39 np0005548790.localdomain sudo[54375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:39 np0005548790.localdomain python3[54377]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:39 np0005548790.localdomain sudo[54375]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:39 np0005548790.localdomain sudo[54418]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yenebxyxcucawdftbhcpetbjmqaufqmb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:39 np0005548790.localdomain sudo[54418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:39 np0005548790.localdomain python3[54420]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008819.1189885-84748-238739960610787/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:39 np0005548790.localdomain sudo[54418]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:40 np0005548790.localdomain sudo[54448]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvfdadqqthkyvpzzjdwzqdejrctqajsh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:40 np0005548790.localdomain sudo[54448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:40 np0005548790.localdomain python3[54450]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:13:40 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:13:40 np0005548790.localdomain systemd-sysv-generator[54481]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:13:40 np0005548790.localdomain systemd-rc-local-generator[54477]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:13:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:13:40 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:13:40 np0005548790.localdomain systemd-sysv-generator[54518]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:13:40 np0005548790.localdomain systemd-rc-local-generator[54513]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:13:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:13:40 np0005548790.localdomain systemd[1]: Starting TripleO Container Shutdown...
Dec 06 08:13:40 np0005548790.localdomain systemd[1]: Finished TripleO Container Shutdown.
Dec 06 08:13:40 np0005548790.localdomain sudo[54448]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 np0005548790.localdomain sudo[54572]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hivrktskpvfbypfokjyswoadgpowhtii ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:41 np0005548790.localdomain sudo[54572]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:41 np0005548790.localdomain python3[54574]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:41 np0005548790.localdomain sudo[54572]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:41 np0005548790.localdomain sshd[53750]: Received disconnect from 45.78.219.217 port 39494:11: Bye Bye [preauth]
Dec 06 08:13:41 np0005548790.localdomain sshd[53750]: Disconnected from authenticating user root 45.78.219.217 port 39494 [preauth]
Dec 06 08:13:41 np0005548790.localdomain sudo[54615]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvwvjyufmornrqpudjbklduhsuinohyg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:41 np0005548790.localdomain sudo[54615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:41 np0005548790.localdomain python3[54617]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008821.0561366-84782-190969819271123/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:41 np0005548790.localdomain sudo[54615]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:42 np0005548790.localdomain sudo[54677]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htldobsmobgcoesttcnavjekzzrxqzce ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:42 np0005548790.localdomain sudo[54677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:42 np0005548790.localdomain python3[54679]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:13:42 np0005548790.localdomain sudo[54677]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:42 np0005548790.localdomain sudo[54720]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twelbikifdhwdxyihzpgulppafttddjp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:42 np0005548790.localdomain sudo[54720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:42 np0005548790.localdomain python3[54722]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008821.9898522-84861-217045620335944/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:42 np0005548790.localdomain sudo[54720]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:42 np0005548790.localdomain sudo[54750]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xblyxzvwnryrvouupmahwwrjxfwdndbj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:42 np0005548790.localdomain sudo[54750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:43 np0005548790.localdomain python3[54752]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:13:43 np0005548790.localdomain systemd-rc-local-generator[54775]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:13:43 np0005548790.localdomain systemd-sysv-generator[54781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:13:43 np0005548790.localdomain systemd-rc-local-generator[54815]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:13:43 np0005548790.localdomain systemd-sysv-generator[54822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 08:13:43 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 08:13:43 np0005548790.localdomain sudo[54750]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:44 np0005548790.localdomain sudo[54843]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuvsyvddvobopzbjehovghjstldleyeu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:44 np0005548790.localdomain sudo[54843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: c93aed9c81ad5102fc4c6784fdec0c75
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 02a6418b6bc78669b6757e55b0a3cf68
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 46b3928e39956af0ccbc08ab55267e91
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 94eddc2d1a780b6dc03d015a7bd0e411
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 94eddc2d1a780b6dc03d015a7bd0e411
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 9b9208098644933bd8c0484efcd7b934
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain python3[54845]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 1c14d9f34e8565ad391b489e982af70f
Dec 06 08:13:44 np0005548790.localdomain sudo[54843]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:44 np0005548790.localdomain sudo[54859]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orhbrnqrratjsiqzaskukgwbfcieqpoc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:44 np0005548790.localdomain sudo[54859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:44 np0005548790.localdomain sudo[54859]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:45 np0005548790.localdomain sudo[54901]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ooqjheikfpevqsvovbyqwdgopuuvnuda ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:45 np0005548790.localdomain sudo[54901]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:45 np0005548790.localdomain python3[54903]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 06 08:13:46 np0005548790.localdomain podman[54942]: 2025-12-06 08:13:46.036744986 +0000 UTC m=+0.080279239 container create 1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, name=rhosp17/openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr_init_logs, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: Started libpod-conmon-1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118.scope.
Dec 06 08:13:46 np0005548790.localdomain podman[54942]: 2025-12-06 08:13:45.994304649 +0000 UTC m=+0.037838922 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:46 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc46538263e83004254135f9d6e87019950ff3d90284a88c9b18ce035a29527/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:46 np0005548790.localdomain podman[54942]: 2025-12-06 08:13:46.125744932 +0000 UTC m=+0.169279185 container init 1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr_init_logs, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:13:46 np0005548790.localdomain podman[54942]: 2025-12-06 08:13:46.135695074 +0000 UTC m=+0.179229327 container start 1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:13:46 np0005548790.localdomain podman[54942]: 2025-12-06 08:13:46.135982411 +0000 UTC m=+0.179516664 container attach 1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, version=17.1.12, container_name=metrics_qdr_init_logs, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: libpod-1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118.scope: Deactivated successfully.
Dec 06 08:13:46 np0005548790.localdomain podman[54942]: 2025-12-06 08:13:46.145770728 +0000 UTC m=+0.189304991 container died 1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr_init_logs, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 06 08:13:46 np0005548790.localdomain podman[54961]: 2025-12-06 08:13:46.233825618 +0000 UTC m=+0.074898342 container cleanup 1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr_init_logs, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com)
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: libpod-conmon-1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118.scope: Deactivated successfully.
Dec 06 08:13:46 np0005548790.localdomain python3[54903]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd
Dec 06 08:13:46 np0005548790.localdomain podman[55034]: 2025-12-06 08:13:46.678310024 +0000 UTC m=+0.074755338 container create ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: Started libpod-conmon-ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.scope.
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:13:46 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415e7a279decd7116c2befbd34e92cf4f0c1820f58473bd34c5452500e4d856c/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:46 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415e7a279decd7116c2befbd34e92cf4f0c1820f58473bd34c5452500e4d856c/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 06 08:13:46 np0005548790.localdomain podman[55034]: 2025-12-06 08:13:46.642642133 +0000 UTC m=+0.039087417 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:13:46 np0005548790.localdomain podman[55034]: 2025-12-06 08:13:46.786962086 +0000 UTC m=+0.183407440 container init ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:13:46 np0005548790.localdomain sudo[55055]: qdrouterd : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:13:46 np0005548790.localdomain sudo[55055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42465)
Dec 06 08:13:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:13:46 np0005548790.localdomain podman[55034]: 2025-12-06 08:13:46.836424565 +0000 UTC m=+0.232869889 container start ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true)
Dec 06 08:13:46 np0005548790.localdomain python3[54903]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=c93aed9c81ad5102fc4c6784fdec0c75 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 06 08:13:46 np0005548790.localdomain sudo[55055]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:46 np0005548790.localdomain podman[55057]: 2025-12-06 08:13:46.91625658 +0000 UTC m=+0.089369026 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=)
Dec 06 08:13:47 np0005548790.localdomain sudo[54901]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0bc46538263e83004254135f9d6e87019950ff3d90284a88c9b18ce035a29527-merged.mount: Deactivated successfully.
Dec 06 08:13:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f94a247a08f33253de5dca9e3f989f11a3fd35790ec1527de15dbb86278f118-userdata-shm.mount: Deactivated successfully.
Dec 06 08:13:47 np0005548790.localdomain podman[55057]: 2025-12-06 08:13:47.154099144 +0000 UTC m=+0.327211490 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 08:13:47 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:13:47 np0005548790.localdomain sudo[55127]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmoucrczzkpluzgnbmaqjtzbzbmybskl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:47 np0005548790.localdomain sudo[55127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:47 np0005548790.localdomain python3[55129]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:47 np0005548790.localdomain sudo[55127]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:47 np0005548790.localdomain sudo[55143]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-naspqxtncdqsoyhtnrnukukbqcdkphbt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:47 np0005548790.localdomain sudo[55143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:47 np0005548790.localdomain python3[55145]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:13:47 np0005548790.localdomain sudo[55143]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:48 np0005548790.localdomain sudo[55204]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pesqkdjrqeuwmohntyogurbvxddsjwdd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:48 np0005548790.localdomain sudo[55204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:48 np0005548790.localdomain python3[55206]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765008827.7429829-85162-148650734545386/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:48 np0005548790.localdomain sudo[55204]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:48 np0005548790.localdomain sudo[55220]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-feufcqfjfnsstpllrhuipbwzmnvhndje ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:48 np0005548790.localdomain sudo[55220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:48 np0005548790.localdomain python3[55222]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 08:13:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:13:48 np0005548790.localdomain systemd-rc-local-generator[55243]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:13:48 np0005548790.localdomain systemd-sysv-generator[55248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:13:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:13:49 np0005548790.localdomain sudo[55220]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:49 np0005548790.localdomain sudo[55271]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-meqsaavttdgiovqygsaqazrurawprdvp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:13:49 np0005548790.localdomain sudo[55271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:49 np0005548790.localdomain python3[55273]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:13:49 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:13:49 np0005548790.localdomain systemd-sysv-generator[55303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:13:49 np0005548790.localdomain systemd-rc-local-generator[55298]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:13:49 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:13:49 np0005548790.localdomain systemd[1]: Starting metrics_qdr container...
Dec 06 08:13:49 np0005548790.localdomain systemd[1]: Started metrics_qdr container.
Dec 06 08:13:49 np0005548790.localdomain sudo[55271]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:50 np0005548790.localdomain sudo[55350]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfrtzomykmsodkbitghgguhseesjauyr ; /usr/bin/python3
Dec 06 08:13:50 np0005548790.localdomain sudo[55350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:50 np0005548790.localdomain python3[55352]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:50 np0005548790.localdomain sudo[55350]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:50 np0005548790.localdomain sudo[55398]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hozetpdtixpttylfrhjqacbdsbxtdqfm ; /usr/bin/python3
Dec 06 08:13:50 np0005548790.localdomain sudo[55398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:51 np0005548790.localdomain sudo[55398]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:51 np0005548790.localdomain sudo[55441]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vppwkbpgcdzyadaicaebwcomtbtszeuq ; /usr/bin/python3
Dec 06 08:13:51 np0005548790.localdomain sudo[55441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:51 np0005548790.localdomain sudo[55441]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:51 np0005548790.localdomain sudo[55471]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqobhhihumfuezqrxkxqhdjlpqvwcnaj ; /usr/bin/python3
Dec 06 08:13:51 np0005548790.localdomain sudo[55471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:51 np0005548790.localdomain python3[55473]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005548790 step=1 update_config_hash_only=False
Dec 06 08:13:51 np0005548790.localdomain sudo[55471]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:52 np0005548790.localdomain sudo[55487]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jiwbyjwrkmndogocoekgrumxtgcbahnn ; /usr/bin/python3
Dec 06 08:13:52 np0005548790.localdomain sudo[55487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:52 np0005548790.localdomain python3[55489]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:13:52 np0005548790.localdomain sudo[55487]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:52 np0005548790.localdomain sudo[55503]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhejlntpxpreisspjnuqjqzedwqrkydx ; /usr/bin/python3
Dec 06 08:13:52 np0005548790.localdomain sudo[55503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:13:52 np0005548790.localdomain python3[55505]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 06 08:13:52 np0005548790.localdomain sudo[55503]: pam_unix(sudo:session): session closed for user root
Dec 06 08:13:54 np0005548790.localdomain sshd[55506]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:13:54 np0005548790.localdomain sshd[55506]: Invalid user solana from 193.32.162.146 port 50074
Dec 06 08:13:54 np0005548790.localdomain sshd[55506]: Connection closed by invalid user solana 193.32.162.146 port 50074 [preauth]
Dec 06 08:14:01 np0005548790.localdomain sudo[55508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:14:01 np0005548790.localdomain sudo[55508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:14:01 np0005548790.localdomain sudo[55508]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:01 np0005548790.localdomain sudo[55523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:14:01 np0005548790.localdomain sudo[55523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:14:02 np0005548790.localdomain sudo[55523]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:03 np0005548790.localdomain sudo[55570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:14:03 np0005548790.localdomain sudo[55570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:14:03 np0005548790.localdomain sudo[55570]: pam_unix(sudo:session): session closed for user root
Dec 06 08:14:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:14:17 np0005548790.localdomain podman[55585]: 2025-12-06 08:14:17.591002847 +0000 UTC m=+0.101054656 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public)
Dec 06 08:14:17 np0005548790.localdomain podman[55585]: 2025-12-06 08:14:17.783373381 +0000 UTC m=+0.293425100 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:14:17 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:14:30 np0005548790.localdomain sshd[55614]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:14:40 np0005548790.localdomain sshd[55614]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:14:40 np0005548790.localdomain sshd[55614]: banner exchange: Connection from 14.103.111.135 port 48314: Connection timed out
Dec 06 08:14:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:14:48 np0005548790.localdomain podman[55615]: 2025-12-06 08:14:48.563827609 +0000 UTC m=+0.079795137 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1)
Dec 06 08:14:48 np0005548790.localdomain podman[55615]: 2025-12-06 08:14:48.783212258 +0000 UTC m=+0.299179416 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:14:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:15:03 np0005548790.localdomain sudo[55644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:15:03 np0005548790.localdomain sudo[55644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:15:03 np0005548790.localdomain sudo[55644]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:03 np0005548790.localdomain sudo[55659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:15:03 np0005548790.localdomain sudo[55659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:15:03 np0005548790.localdomain sudo[55659]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:04 np0005548790.localdomain sudo[55706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:15:04 np0005548790.localdomain sudo[55706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:15:04 np0005548790.localdomain sudo[55706]: pam_unix(sudo:session): session closed for user root
Dec 06 08:15:15 np0005548790.localdomain sshd[55721]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:15:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:15:19 np0005548790.localdomain podman[55722]: 2025-12-06 08:15:19.554734237 +0000 UTC m=+0.070483801 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20251118.1)
Dec 06 08:15:19 np0005548790.localdomain podman[55722]: 2025-12-06 08:15:19.733175935 +0000 UTC m=+0.248925569 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:15:19 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:15:26 np0005548790.localdomain sshd[55721]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:15:26 np0005548790.localdomain sshd[55721]: banner exchange: Connection from 14.103.138.116 port 42750: Connection timed out
Dec 06 08:15:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:15:50 np0005548790.localdomain podman[55751]: 2025-12-06 08:15:50.56879661 +0000 UTC m=+0.078960253 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, container_name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:15:50 np0005548790.localdomain podman[55751]: 2025-12-06 08:15:50.753329114 +0000 UTC m=+0.263492757 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:15:50 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:16:01 np0005548790.localdomain sshd[55780]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:16:02 np0005548790.localdomain sshd[55782]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:16:03 np0005548790.localdomain sshd[55780]: Received disconnect from 103.226.138.52 port 43026:11: Bye Bye [preauth]
Dec 06 08:16:03 np0005548790.localdomain sshd[55780]: Disconnected from authenticating user root 103.226.138.52 port 43026 [preauth]
Dec 06 08:16:04 np0005548790.localdomain sudo[55783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:16:04 np0005548790.localdomain sudo[55783]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:16:04 np0005548790.localdomain sudo[55783]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:04 np0005548790.localdomain sudo[55798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:16:04 np0005548790.localdomain sudo[55798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:16:05 np0005548790.localdomain sudo[55798]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:06 np0005548790.localdomain sudo[55846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:16:06 np0005548790.localdomain sudo[55846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:16:06 np0005548790.localdomain sudo[55846]: pam_unix(sudo:session): session closed for user root
Dec 06 08:16:07 np0005548790.localdomain sshd[55861]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:16:11 np0005548790.localdomain sshd[55861]: Received disconnect from 45.78.219.217 port 60040:11: Bye Bye [preauth]
Dec 06 08:16:11 np0005548790.localdomain sshd[55861]: Disconnected from authenticating user root 45.78.219.217 port 60040 [preauth]
Dec 06 08:16:13 np0005548790.localdomain sshd[55782]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:16:13 np0005548790.localdomain sshd[55782]: banner exchange: Connection from 180.184.134.158 port 43366: Connection timed out
Dec 06 08:16:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:16:21 np0005548790.localdomain systemd[1]: tmp-crun.P4kVkr.mount: Deactivated successfully.
Dec 06 08:16:21 np0005548790.localdomain podman[55863]: 2025-12-06 08:16:21.566116887 +0000 UTC m=+0.083923589 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:16:21 np0005548790.localdomain podman[55863]: 2025-12-06 08:16:21.78418448 +0000 UTC m=+0.301991162 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, container_name=metrics_qdr, tcib_managed=true, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 06 08:16:21 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:16:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:16:52 np0005548790.localdomain systemd[1]: tmp-crun.uqUk2G.mount: Deactivated successfully.
Dec 06 08:16:52 np0005548790.localdomain podman[55892]: 2025-12-06 08:16:52.550066262 +0000 UTC m=+0.069013356 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:16:52 np0005548790.localdomain podman[55892]: 2025-12-06 08:16:52.749212537 +0000 UTC m=+0.268159631 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 06 08:16:52 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:16:59 np0005548790.localdomain sshd[55921]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:17:00 np0005548790.localdomain sshd[55921]: Received disconnect from 35.247.75.98 port 53586:11: Bye Bye [preauth]
Dec 06 08:17:00 np0005548790.localdomain sshd[55921]: Disconnected from authenticating user root 35.247.75.98 port 53586 [preauth]
Dec 06 08:17:02 np0005548790.localdomain sshd[55923]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:17:03 np0005548790.localdomain sshd[55923]: Invalid user sol from 193.32.162.146 port 34252
Dec 06 08:17:03 np0005548790.localdomain sshd[55923]: Connection closed by invalid user sol 193.32.162.146 port 34252 [preauth]
Dec 06 08:17:06 np0005548790.localdomain sudo[55926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:17:06 np0005548790.localdomain sudo[55926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:17:06 np0005548790.localdomain sudo[55926]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:06 np0005548790.localdomain sudo[55941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:17:06 np0005548790.localdomain sudo[55941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:17:06 np0005548790.localdomain sudo[55941]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:07 np0005548790.localdomain sudo[55987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:17:07 np0005548790.localdomain sudo[55987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:17:07 np0005548790.localdomain sudo[55987]: pam_unix(sudo:session): session closed for user root
Dec 06 08:17:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:17:23 np0005548790.localdomain systemd[1]: tmp-crun.9jXw2W.mount: Deactivated successfully.
Dec 06 08:17:23 np0005548790.localdomain podman[56002]: 2025-12-06 08:17:23.560567507 +0000 UTC m=+0.078289418 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:17:23 np0005548790.localdomain podman[56002]: 2025-12-06 08:17:23.735133736 +0000 UTC m=+0.252855647 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, vcs-type=git)
Dec 06 08:17:23 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:17:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:17:54 np0005548790.localdomain podman[56030]: 2025-12-06 08:17:54.55474015 +0000 UTC m=+0.072480211 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com)
Dec 06 08:17:54 np0005548790.localdomain podman[56030]: 2025-12-06 08:17:54.779352682 +0000 UTC m=+0.297092753 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public)
Dec 06 08:17:54 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:18:07 np0005548790.localdomain sudo[56059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:18:07 np0005548790.localdomain sudo[56059]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:18:07 np0005548790.localdomain sudo[56059]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:07 np0005548790.localdomain sudo[56074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:18:07 np0005548790.localdomain sudo[56074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:18:08 np0005548790.localdomain sudo[56074]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:09 np0005548790.localdomain sudo[56121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:18:09 np0005548790.localdomain sudo[56121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:18:09 np0005548790.localdomain sudo[56121]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:18:25 np0005548790.localdomain podman[56136]: 2025-12-06 08:18:25.59843337 +0000 UTC m=+0.073475188 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:18:25 np0005548790.localdomain podman[56136]: 2025-12-06 08:18:25.799163026 +0000 UTC m=+0.274204854 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:18:25 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:18:27 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,1,5] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:29 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [3,1,5] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:31 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 22 pg[3.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1,3,2] r=1 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:33 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [5,0,1] r=1 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:34 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 25 pg[5.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [4,5,3] r=2 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:37 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 29 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.215983391s) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active pruub 1120.068725586s@ mbc={}] start_peering_interval up [3,1,5] -> [3,1,5], acting [3,1,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:37 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 29 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29 pruub=15.215983391s) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown pruub 1120.068725586s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.19( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.18( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.17( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.16( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.15( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.14( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.13( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.12( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.11( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.10( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.3( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.7( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.4( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.2( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.6( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.5( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.8( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.9( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1b( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1a( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1d( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1e( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1f( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1c( empty local-lis/les=20/21 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.0( empty local-lis/les=29/30 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:38 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 30 pg[2.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=20/20 les/c/f=21/21/0 sis=29) [3,1,5] r=0 lpr=29 pi=[20,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:39 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 31 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=15.685674667s) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 active pruub 1121.976806641s@ mbc={}] start_peering_interval up [1,3,2] -> [1,3,2], acting [1,3,2] -> [1,3,2], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:39 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 31 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=31 pruub=15.682583809s) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1121.976806641s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:39 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 31 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31 pruub=10.199378014s) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 active pruub 1120.878173828s@ mbc={}] start_peering_interval up [5,0,1] -> [5,0,1], acting [5,0,1] -> [5,0,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:39 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 31 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=31 pruub=10.195084572s) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1120.878173828s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:39 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 06 08:18:39 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1c( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.16( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.17( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.14( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.13( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.12( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.d( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.11( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.6( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.a( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.b( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.4( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1e( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.7( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.3( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.10( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1d( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.c( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.2( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.f( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.18( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1a( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.e( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.19( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.1b( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 32 pg[4.5( empty local-lis/les=24/25 n=0 ec=31/24 lis/c=24/24 les/c/f=25/25/0 sis=31) [5,0,1] r=1 lpr=31 pi=[24,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1e( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1c( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1f( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1a( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1b( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.8( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.4( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.7( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.5( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1d( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.6( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.3( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.2( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.b( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.a( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.1( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.d( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.c( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.f( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.11( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.e( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.10( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.13( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.12( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.15( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.14( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.17( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.19( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.18( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.16( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 32 pg[3.9( empty local-lis/les=22/23 n=0 ec=31/22 lis/c=22/22 les/c/f=23/23/0 sis=31) [1,3,2] r=1 lpr=31 pi=[22,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 06 08:18:40 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 06 08:18:41 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 33 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=8.847837448s) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 active pruub 1117.197998047s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:41 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 33 pg[5.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=33 pruub=8.844660759s) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1117.197998047s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:41 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 06 08:18:41 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.e( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.19( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.4( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.d( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.c( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.17( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.15( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.16( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.14( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.13( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.11( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.10( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1e( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.12( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1f( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.f( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.9( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.8( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.a( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.b( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.7( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.5( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.2( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.3( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.6( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1d( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1c( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1b( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.1a( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:42 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 34 pg[5.18( empty local-lis/les=25/26 n=0 ec=33/25 lis/c=25/25 les/c/f=26/26/0 sis=33) [4,5,3] r=2 lpr=33 pi=[25,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:44 np0005548790.localdomain sshd[56166]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:18:45 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 06 08:18:45 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 06 08:18:46 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 35 pg[6.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [5,0,1] r=1 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:47 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 37 pg[7.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,1,5] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:47 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Dec 06 08:18:47 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Dec 06 08:18:48 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,1,5] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:51 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 06 08:18:51 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 06 08:18:52 np0005548790.localdomain sudo[56169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:18:52 np0005548790.localdomain sudo[56169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:18:52 np0005548790.localdomain sudo[56169]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.344844818s) [5,3,4] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384521484s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,4], acting [4,5,3] -> [5,3,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.295213699s) [5,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.334960938s@ mbc={}] start_peering_interval up [1,3,2] -> [5,1,3], acting [1,3,2] -> [5,1,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.344774246s) [5,3,4] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384521484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.345003128s) [3,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384887695s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.345003128s) [3,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.384887695s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.295147896s) [5,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.334960938s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294910431s) [5,0,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.334838867s@ mbc={}] start_peering_interval up [1,3,2] -> [5,0,1], acting [1,3,2] -> [5,0,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294776917s) [5,0,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.334838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.257468224s) [3,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297607422s@ mbc={}] start_peering_interval up [3,1,5] -> [3,1,2], acting [3,1,5] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.344063759s) [0,4,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384155273s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,5], acting [4,5,3] -> [0,4,5], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.17( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.257468224s) [3,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1129.297607422s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.344013214s) [0,4,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384155273s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.257119179s) [0,1,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297729492s@ mbc={}] start_peering_interval up [3,1,5] -> [0,1,2], acting [3,1,5] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.257772446s) [4,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297851562s@ mbc={}] start_peering_interval up [3,1,5] -> [4,3,2], acting [3,1,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.343489647s) [3,4,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384033203s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294312477s) [3,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.334960938s@ mbc={}] start_peering_interval up [1,3,2] -> [3,2,4], acting [1,3,2] -> [3,2,4], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294312477s) [3,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.334960938s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.343489647s) [3,4,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.384033203s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294435501s) [5,3,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.335327148s@ mbc={}] start_peering_interval up [1,3,2] -> [5,3,1], acting [1,3,2] -> [5,3,1], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.18( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.256986618s) [4,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297851562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294414520s) [5,3,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.335327148s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.16( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.256760597s) [0,1,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297729492s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.256948471s) [2,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297851562s@ mbc={}] start_peering_interval up [3,1,5] -> [2,0,4], acting [3,1,5] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249951363s) [1,2,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291137695s@ mbc={}] start_peering_interval up [3,1,5] -> [1,2,3], acting [3,1,5] -> [1,2,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.343367577s) [1,3,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384521484s@ mbc={}] start_peering_interval up [4,5,3] -> [1,3,5], acting [4,5,3] -> [1,3,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.15( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249918938s) [1,2,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291137695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294059753s) [3,4,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.335327148s@ mbc={}] start_peering_interval up [1,3,2] -> [3,4,2], acting [1,3,2] -> [3,4,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.343262672s) [1,3,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384521484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293511391s) [1,3,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.334960938s@ mbc={}] start_peering_interval up [1,3,2] -> [1,3,5], acting [1,3,2] -> [1,3,5], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294059753s) [3,4,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.335327148s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293488503s) [1,3,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.334960938s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249556541s) [3,4,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291015625s@ mbc={}] start_peering_interval up [3,1,5] -> [3,4,2], acting [3,1,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342728615s) [2,4,0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384399414s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342701912s) [2,4,0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384399414s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293409348s) [2,3,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.335083008s@ mbc={}] start_peering_interval up [1,3,2] -> [2,3,1], acting [1,3,2] -> [2,3,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293367386s) [2,3,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.335083008s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.14( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249556541s) [3,4,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1129.291015625s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342264175s) [3,2,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384155273s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.250746727s) [1,0,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292602539s@ mbc={}] start_peering_interval up [3,1,5] -> [1,0,2], acting [3,1,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342264175s) [3,2,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.384155273s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249138832s) [4,2,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291015625s@ mbc={}] start_peering_interval up [3,1,5] -> [4,2,3], acting [3,1,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293183327s) [3,2,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.335083008s@ mbc={}] start_peering_interval up [1,3,2] -> [3,2,1], acting [1,3,2] -> [3,2,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.13( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.250659943s) [1,0,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292602539s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293183327s) [3,2,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.335083008s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.12( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249111176s) [4,2,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291015625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342030525s) [3,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384155273s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,1], acting [4,5,3] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249110222s) [0,2,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291259766s@ mbc={}] start_peering_interval up [3,1,5] -> [0,2,4], acting [3,1,5] -> [0,2,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.11( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249023438s) [0,2,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291259766s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.341767311s) [5,1,0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.383911133s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.341588020s) [5,1,0] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.383911133s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249093056s) [4,5,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291625977s@ mbc={}] start_peering_interval up [3,1,5] -> [4,5,3], acting [3,1,5] -> [4,5,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294477463s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.336914062s@ mbc={}] start_peering_interval up [1,3,2] -> [4,0,2], acting [1,3,2] -> [4,0,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.19( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.255395889s) [2,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297851562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342030525s) [3,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.384155273s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.10( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.249065399s) [4,5,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291625977s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.11( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294457436s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.336914062s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342115402s) [1,2,3] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384643555s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342093468s) [1,2,3] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384643555s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248574257s) [4,0,5] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291259766s@ mbc={}] start_peering_interval up [3,1,5] -> [4,0,5], acting [3,1,5] -> [4,0,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294545174s) [3,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337402344s@ mbc={}] start_peering_interval up [1,3,2] -> [3,1,5], acting [1,3,2] -> [3,1,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.341689110s) [0,4,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384521484s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294194221s) [0,1,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337036133s@ mbc={}] start_peering_interval up [1,3,2] -> [0,1,2], acting [1,3,2] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248542786s) [4,0,5] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291259766s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294545174s) [3,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.337402344s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294148445s) [0,1,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337036133s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.341635704s) [0,4,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384521484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248369217s) [2,4,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291259766s@ mbc={}] start_peering_interval up [3,1,5] -> [2,4,3], acting [3,1,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342182159s) [2,1,3] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385253906s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,3], acting [4,5,3] -> [2,1,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.342163086s) [2,1,3] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385253906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248277664s) [2,4,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291259766s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248517990s) [1,0,5] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291625977s@ mbc={}] start_peering_interval up [3,1,5] -> [1,0,5], acting [3,1,5] -> [1,0,5], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248387337s) [1,5,0] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291503906s@ mbc={}] start_peering_interval up [3,1,5] -> [1,5,0], acting [3,1,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248368263s) [1,5,0] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291503906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248470306s) [1,0,5] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291625977s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294240952s) [0,5,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337402344s@ mbc={}] start_peering_interval up [1,3,2] -> [0,5,1], acting [1,3,2] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294182777s) [0,5,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337402344s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293858528s) [3,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337036133s@ mbc={}] start_peering_interval up [1,3,2] -> [3,1,2], acting [1,3,2] -> [3,1,2], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293858528s) [3,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.337036133s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340556145s) [5,0,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.383911133s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.296958923s) [3,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340209961s@ mbc={}] start_peering_interval up [1,3,2] -> [3,5,1], acting [1,3,2] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248411179s) [1,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291870117s@ mbc={}] start_peering_interval up [3,1,5] -> [1,3,2], acting [3,1,5] -> [1,3,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340518951s) [5,0,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.383911133s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.296958923s) [3,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.340209961s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248380661s) [1,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.291870117s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248791695s) [1,5,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292236328s@ mbc={}] start_peering_interval up [3,1,5] -> [1,5,3], acting [3,1,5] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340486526s) [4,0,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384033203s@ mbc={}] start_peering_interval up [4,5,3] -> [4,0,2], acting [4,5,3] -> [4,0,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248764992s) [1,5,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.297248840s) [2,0,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340698242s@ mbc={}] start_peering_interval up [1,3,2] -> [2,0,1], acting [1,3,2] -> [2,0,1], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340413094s) [1,2,3] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384033203s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,3], acting [4,5,3] -> [1,2,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340434074s) [4,0,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384033203s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.297224045s) [2,0,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340698242s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340391159s) [1,2,3] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384033203s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248128891s) [3,2,1] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291870117s@ mbc={}] start_peering_interval up [3,1,5] -> [3,2,1], acting [3,1,5] -> [3,2,1], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293308258s) [5,4,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337036133s@ mbc={}] start_peering_interval up [1,3,2] -> [5,4,0], acting [1,3,2] -> [5,4,0], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293305397s) [2,3,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337036133s@ mbc={}] start_peering_interval up [1,3,2] -> [2,3,4], acting [1,3,2] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340744972s) [3,5,4] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384521484s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293175697s) [5,4,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337036133s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293275833s) [2,3,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337036133s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340744972s) [3,5,4] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.384521484s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.341273308s) [2,0,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385253906s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.3( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248128891s) [3,2,1] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1129.291870117s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.341244698s) [2,0,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385253906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247752190s) [3,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.291870117s@ mbc={}] start_peering_interval up [3,1,5] -> [3,1,2], acting [3,1,5] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.7( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247752190s) [3,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1129.291870117s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293488503s) [5,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337646484s@ mbc={}] start_peering_interval up [1,3,2] -> [5,0,4], acting [1,3,2] -> [5,0,4], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248374939s) [2,4,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292480469s@ mbc={}] start_peering_interval up [3,1,5] -> [2,4,3], acting [3,1,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293467522s) [5,0,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337646484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.248311996s) [2,4,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292480469s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340517044s) [5,3,1] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384643555s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340458870s) [5,3,1] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384643555s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247896194s) [5,1,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292236328s@ mbc={}] start_peering_interval up [3,1,5] -> [5,1,3], acting [3,1,5] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293116570s) [0,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337402344s@ mbc={}] start_peering_interval up [1,3,2] -> [0,2,4], acting [1,3,2] -> [0,2,4], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340577126s) [2,3,1] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384887695s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293027878s) [0,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337402344s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.253096581s) [3,5,4] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297607422s@ mbc={}] start_peering_interval up [3,1,5] -> [3,5,4], acting [3,1,5] -> [3,5,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.4( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247865677s) [5,1,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292918205s) [3,5,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337524414s@ mbc={}] start_peering_interval up [1,3,2] -> [3,5,4], acting [1,3,2] -> [3,5,4], acting_primary 1 -> 3, up_primary 1 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340497971s) [2,3,1] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384887695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340298653s) [0,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384887695s@ mbc={}] start_peering_interval up [4,5,3] -> [0,5,1], acting [4,5,3] -> [0,5,1], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340270996s) [0,5,1] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384887695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292918205s) [3,5,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1131.337524414s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247591019s) [1,2,0] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292236328s@ mbc={}] start_peering_interval up [3,1,5] -> [1,2,0], acting [3,1,5] -> [1,2,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340350151s) [3,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385131836s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,1], acting [4,5,3] -> [3,5,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292864799s) [2,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337646484s@ mbc={}] start_peering_interval up [1,3,2] -> [2,1,3], acting [1,3,2] -> [2,1,3], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292836189s) [2,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337646484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.5( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247558594s) [1,2,0] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.340350151s) [3,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1133.385131836s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.2( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.253096581s) [3,5,4] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1129.297607422s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292577744s) [2,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337646484s@ mbc={}] start_peering_interval up [1,3,2] -> [2,1,3], acting [1,3,2] -> [2,1,3], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292554855s) [2,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337646484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247049332s) [2,1,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292114258s@ mbc={}] start_peering_interval up [3,1,5] -> [2,1,3], acting [3,1,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339537621s) [0,1,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.384521484s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,2], acting [4,5,3] -> [0,1,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.6( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247018814s) [2,1,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292114258s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339481354s) [0,1,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.384521484s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294887543s) [4,3,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340087891s@ mbc={}] start_peering_interval up [1,3,2] -> [4,3,2], acting [1,3,2] -> [4,3,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.252223015s) [0,1,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297485352s@ mbc={}] start_peering_interval up [3,1,5] -> [0,1,2], acting [3,1,5] -> [0,1,2], acting_primary 3 -> 0, up_primary 3 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294859886s) [4,3,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340087891s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.247076035s) [5,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292358398s@ mbc={}] start_peering_interval up [3,1,5] -> [5,0,4], acting [3,1,5] -> [5,0,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292304993s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.337768555s@ mbc={}] start_peering_interval up [1,3,2] -> [4,5,0], acting [1,3,2] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.8( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251940727s) [0,1,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297485352s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339687347s) [5,0,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385253906s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.292225838s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.337768555s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294926643s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340454102s@ mbc={}] start_peering_interval up [1,3,2] -> [4,0,2], acting [1,3,2] -> [4,0,2], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.9( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.246904373s) [5,0,4] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292358398s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339620590s) [5,0,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385253906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294857979s) [4,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.252209663s) [3,4,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297851562s@ mbc={}] start_peering_interval up [3,1,5] -> [3,4,2], acting [3,1,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1a( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.252209663s) [3,4,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1129.297851562s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.289131165s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.334838867s@ mbc={}] start_peering_interval up [1,3,2] -> [4,5,0], acting [1,3,2] -> [4,5,0], acting_primary 1 -> 4, up_primary 1 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339132309s) [0,4,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385009766s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.289081573s) [4,5,0] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.334838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.246905327s) [1,0,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.292724609s@ mbc={}] start_peering_interval up [3,1,5] -> [1,0,2], acting [3,1,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1b( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.246878624s) [1,0,2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.292724609s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339313507s) [0,5,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385253906s@ mbc={}] start_peering_interval up [4,5,3] -> [0,5,4], acting [4,5,3] -> [0,5,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339083672s) [0,4,2] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385009766s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251752853s) [4,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297729492s@ mbc={}] start_peering_interval up [3,1,5] -> [4,3,2], acting [3,1,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339232445s) [0,5,4] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385253906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294282913s) [1,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340332031s@ mbc={}] start_peering_interval up [1,3,2] -> [1,0,2], acting [1,3,2] -> [1,0,2], acting_primary 1 -> 1, up_primary 1 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1c( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251719475s) [4,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297729492s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.339097977s) [1,0,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385253906s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,5], acting [4,5,3] -> [1,0,5], acting_primary 4 -> 1, up_primary 4 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294251442s) [1,0,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340332031s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294288635s) [0,5,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340454102s@ mbc={}] start_peering_interval up [1,3,2] -> [0,5,1], acting [1,3,2] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.337731361s) [2,3,1] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.383911133s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294210434s) [2,3,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340454102s@ mbc={}] start_peering_interval up [1,3,2] -> [2,3,4], acting [1,3,2] -> [2,3,4], acting_primary 1 -> 2, up_primary 1 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.337689400s) [2,3,1] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.383911133s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251575470s) [4,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297851562s@ mbc={}] start_peering_interval up [3,1,5] -> [4,3,2], acting [3,1,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294206619s) [0,5,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338919640s) [1,0,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385253906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251254082s) [5,1,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297607422s@ mbc={}] start_peering_interval up [3,1,5] -> [5,1,3], acting [3,1,5] -> [5,1,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1d( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251521111s) [4,3,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297851562s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1e( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.251223564s) [5,1,3] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297607422s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.294031143s) [2,3,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338545799s) [0,1,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active pruub 1133.385131836s@ mbc={}] start_peering_interval up [4,5,3] -> [0,1,5], acting [4,5,3] -> [0,1,5], acting_primary 4 -> 0, up_primary 4 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293839455s) [5,4,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1131.340454102s@ mbc={}] start_peering_interval up [1,3,2] -> [5,4,3], acting [1,3,2] -> [5,4,3], acting_primary 1 -> 5, up_primary 1 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.293807030s) [5,4,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1131.340454102s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41 pruub=13.338423729s) [0,1,5] r=-1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1133.385131836s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.250763893s) [2,3,4] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active pruub 1129.297607422s@ mbc={}] start_peering_interval up [3,1,5] -> [2,3,4], acting [3,1,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[2.1f( empty local-lis/les=29/30 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=9.250703812s) [2,3,4] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1129.297607422s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.11( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,2,4] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.10( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,4,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.16( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.f( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,1,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.8( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.2( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.18( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,1,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.3( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.1b( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,5,4] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.1c( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.c( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.f( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.9( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.1c( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275738716s) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709594727s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275738716s) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.709594727s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.278512955s) [4,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712402344s@ mbc={}] start_peering_interval up [5,0,1] -> [4,3,2], acting [5,0,1] -> [4,3,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.279014587s) [2,0,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712890625s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,1], acting [5,0,1] -> [2,0,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.278465271s) [4,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712402344s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.278951645s) [2,0,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712890625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276699066s) [2,1,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710815430s@ mbc={}] start_peering_interval up [5,0,1] -> [2,1,3], acting [5,0,1] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.7( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276675224s) [2,1,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710815430s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.282204628s) [2,0,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.716308594s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,4], acting [5,0,1] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.282176018s) [2,0,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.716308594s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276818275s) [3,5,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710937500s@ mbc={}] start_peering_interval up [5,0,1] -> [3,5,1], acting [5,0,1] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276782036s) [4,5,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710937500s@ mbc={}] start_peering_interval up [5,0,1] -> [4,5,0], acting [5,0,1] -> [4,5,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1b( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276792526s) [3,5,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710937500s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276742935s) [4,5,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710937500s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276687622s) [3,1,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.711059570s@ mbc={}] start_peering_interval up [5,0,1] -> [3,1,2], acting [5,0,1] -> [3,1,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.5( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276666641s) [3,1,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.711059570s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276423454s) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710815430s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276423454s) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.710815430s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276688576s) [1,5,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.711181641s@ mbc={}] start_peering_interval up [5,0,1] -> [1,5,0], acting [5,0,1] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275117874s) [3,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709594727s@ mbc={}] start_peering_interval up [5,0,1] -> [3,2,4], acting [5,0,1] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.9( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.276666641s) [1,5,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.711181641s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275312424s) [4,3,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709838867s@ mbc={}] start_peering_interval up [5,0,1] -> [4,3,5], acting [5,0,1] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1a( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275086403s) [3,2,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.709594727s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275230408s) [4,3,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.709838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274929047s) [1,2,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709472656s@ mbc={}] start_peering_interval up [5,0,1] -> [1,2,0], acting [5,0,1] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274909019s) [1,2,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.709472656s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275819778s) [1,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710571289s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,2], acting [5,0,1] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275793076s) [1,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710571289s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275569916s) [5,1,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710327148s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,3], acting [5,0,1] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275578499s) [1,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710327148s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,2], acting [5,0,1] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.4( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275527954s) [5,1,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710327148s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274997711s) [4,0,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709838867s@ mbc={}] start_peering_interval up [5,0,1] -> [4,0,2], acting [5,0,1] -> [4,0,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275740623s) [3,1,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710571289s@ mbc={}] start_peering_interval up [5,0,1] -> [3,1,5], acting [5,0,1] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275556564s) [1,3,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710327148s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.d( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275716782s) [3,1,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710571289s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274954796s) [4,0,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.709838867s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275600433s) [2,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710571289s@ mbc={}] start_peering_interval up [5,0,1] -> [2,4,3], acting [5,0,1] -> [2,4,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274682999s) [3,4,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709594727s@ mbc={}] start_peering_interval up [5,0,1] -> [3,4,2], acting [5,0,1] -> [3,4,2], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.f( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275578499s) [2,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710571289s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.e( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274651527s) [3,4,2] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.709594727s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277372360s) [5,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712402344s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,3], acting [5,0,1] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277344704s) [5,4,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712402344s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277334213s) [5,1,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712402344s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,3], acting [5,0,1] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277804375s) [2,0,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712890625s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,4], acting [5,0,1] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.281321526s) [0,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.716430664s@ mbc={}] start_peering_interval up [5,0,1] -> [0,1,5], acting [5,0,1] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275968552s) [4,2,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.711181641s@ mbc={}] start_peering_interval up [5,0,1] -> [4,2,0], acting [5,0,1] -> [4,2,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.12( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277255058s) [5,1,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712402344s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275939941s) [4,2,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.711181641s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.11( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277691841s) [2,0,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712890625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275974274s) [4,2,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.711303711s@ mbc={}] start_peering_interval up [5,0,1] -> [4,2,0], acting [5,0,1] -> [4,2,0], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.13( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.281321526s) [0,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown pruub 1135.716430664s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.15( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275954247s) [4,2,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.711303711s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277509689s) [5,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712890625s@ mbc={}] start_peering_interval up [5,0,1] -> [5,3,4], acting [5,0,1] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274121284s) [3,2,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.709472656s@ mbc={}] start_peering_interval up [5,0,1] -> [3,2,1], acting [5,0,1] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277474403s) [1,2,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.712890625s@ mbc={}] start_peering_interval up [5,0,1] -> [1,2,3], acting [5,0,1] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.17( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277474403s) [5,3,4] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712890625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.277449608s) [1,2,3] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.712890625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.18( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.274083138s) [3,2,1] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.709472656s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275174141s) [1,3,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active pruub 1135.710815430s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,5], acting [5,0,1] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[4.8( empty local-lis/les=31/32 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=11.275154114s) [1,3,5] r=-1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1135.710815430s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.1b( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.5( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.1a( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.d( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.e( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,4,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:53 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.18( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.10( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,4,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.12( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.17( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,3,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.4( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.f( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,4,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.19( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.1f( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.7( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,1,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.6( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [2,0,4] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.3( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.2( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,4,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.2( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.f( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [4,0,5] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.8( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,3,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.6( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [5,0,4] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.c( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [5,0,4] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.9( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [5,0,4] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.1c( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,2,3] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.14( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [2,4,0] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.17( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [5,1,0] r=2 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.1d( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [5,0,4] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 41 pg[4.1d( empty local-lis/les=0/0 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,3,5] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.19( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [2,0,4] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.1a( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [1,0,5] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[5.1f( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[2.14( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,4,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.14( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,4,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.5( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,2,0] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.1b( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,0,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.16( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[5.11( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,4,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[2.1a( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,4,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.c( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,5,0] r=2 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.d( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,0,5] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=0/0 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [1,0,2] r=1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.b( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [2,0,1] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.11( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.8( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[5.7( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,5,4] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,5,4] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[5.d( empty local-lis/les=0/0 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [4,0,2] r=1 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.1a( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,5,0] r=2 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.5( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,5,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [4,0,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 41 pg[3.1d( empty local-lis/les=0/0 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [1,0,2] r=1 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[4.d( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.10( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[5.16( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.13( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[5.15( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,2,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.a( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[5.1( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [3,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[4.1b( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[4.c( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[3.f( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[4.a( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[3.c( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[3.d( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,2,1] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [3,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[4.5( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,1,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[4.e( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,4,2] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[4.18( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 42 pg[4.1a( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [3,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.1c( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.9( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,4,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[3.3( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,2,4] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.1b( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,5,4] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,2,4] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.10( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,4,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.18( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,1,5] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=29/29 les/c/f=30/30/0 sis=41) [0,1,2] r=0 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.f( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,1,2] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[4.13( empty local-lis/les=41/42 n=0 ec=31/24 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,1,5] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[3.1c( empty local-lis/les=41/42 n=0 ec=31/22 lis/c=31/31 les/c/f=32/32/0 sis=41) [0,5,1] r=0 lpr=41 pi=[31,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=41/42 n=0 ec=33/25 lis/c=33/33 les/c/f=34/34/0 sis=41) [0,5,1] r=0 lpr=41 pi=[33,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:18:54 np0005548790.localdomain sudo[56185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:18:54 np0005548790.localdomain sudo[56185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:18:54 np0005548790.localdomain sudo[56185]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:55 np0005548790.localdomain sudo[56200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:18:55 np0005548790.localdomain sudo[56200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:18:55 np0005548790.localdomain sudo[56200]: pam_unix(sudo:session): session closed for user root
Dec 06 08:18:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:18:56 np0005548790.localdomain systemd[1]: tmp-crun.3KCxlZ.mount: Deactivated successfully.
Dec 06 08:18:56 np0005548790.localdomain podman[56215]: 2025-12-06 08:18:56.573364095 +0000 UTC m=+0.086081508 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:18:56 np0005548790.localdomain podman[56215]: 2025-12-06 08:18:56.770231467 +0000 UTC m=+0.282948860 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044)
Dec 06 08:18:56 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:18:57 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.1b deep-scrub starts
Dec 06 08:18:58 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 06 08:19:00 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 06 08:19:00 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 06 08:19:01 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 06 08:19:01 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 06 08:19:01 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 06 08:19:01 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 06 08:19:03 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Dec 06 08:19:03 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Dec 06 08:19:06 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 06 08:19:06 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 06 08:19:09 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Dec 06 08:19:09 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Dec 06 08:19:10 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 06 08:19:10 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 06 08:19:11 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 06 08:19:11 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 06 08:19:11 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 06 08:19:11 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 06 08:19:13 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 06 08:19:13 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 06 08:19:15 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 06 08:19:15 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 06 08:19:16 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 06 08:19:19 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 06 08:19:19 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 06 08:19:20 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 06 08:19:22 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 06 08:19:22 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 06 08:19:22 np0005548790.localdomain sudo[56257]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygkarprnjkwxqfmhxbshpwzapsfbmzue ; /usr/bin/python3
Dec 06 08:19:22 np0005548790.localdomain sudo[56257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:22 np0005548790.localdomain python3[56259]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:22 np0005548790.localdomain sudo[56257]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:23 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 06 08:19:23 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 06 08:19:24 np0005548790.localdomain sudo[56273]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-onbuejbdkvychewfbeeokdxnlimtfxwb ; /usr/bin/python3
Dec 06 08:19:24 np0005548790.localdomain sudo[56273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:24 np0005548790.localdomain python3[56275]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:24 np0005548790.localdomain sudo[56273]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:26 np0005548790.localdomain sudo[56289]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkwqqkkljdpvavqhjaqqlwqvrdhjapru ; /usr/bin/python3
Dec 06 08:19:26 np0005548790.localdomain sudo[56289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:26 np0005548790.localdomain python3[56291]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:26 np0005548790.localdomain sudo[56289]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:27 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Dec 06 08:19:27 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Dec 06 08:19:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:19:27 np0005548790.localdomain systemd[1]: tmp-crun.UmAfsD.mount: Deactivated successfully.
Dec 06 08:19:27 np0005548790.localdomain podman[56292]: 2025-12-06 08:19:27.557751428 +0000 UTC m=+0.077130243 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:19:27 np0005548790.localdomain podman[56292]: 2025-12-06 08:19:27.764025896 +0000 UTC m=+0.283404671 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 06 08:19:27 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:19:29 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 06 08:19:29 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 06 08:19:29 np0005548790.localdomain sudo[56367]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-opjtukvpjwdhonkyseciynqxilkgmrwc ; /usr/bin/python3
Dec 06 08:19:29 np0005548790.localdomain sudo[56367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:30 np0005548790.localdomain python3[56369]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:19:30 np0005548790.localdomain sudo[56367]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:30 np0005548790.localdomain sudo[56410]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqqffyagklvqbehmsxhopvfoxieehduq ; /usr/bin/python3
Dec 06 08:19:30 np0005548790.localdomain sudo[56410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:30 np0005548790.localdomain python3[56412]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009169.698921-92087-83171847218457/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=9d631b6552ddeaa0e75a39b18f2bdb583e0e85e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:30 np0005548790.localdomain sudo[56410]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:33 np0005548790.localdomain sshd[56166]: Connection closed by 45.78.219.217 port 51684 [preauth]
Dec 06 08:19:35 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 06 08:19:35 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 06 08:19:35 np0005548790.localdomain sudo[56472]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vnbyskndmhyahvctiqnlwgtnhowlnpja ; /usr/bin/python3
Dec 06 08:19:35 np0005548790.localdomain sudo[56472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:35 np0005548790.localdomain python3[56474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:19:35 np0005548790.localdomain sudo[56472]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:35 np0005548790.localdomain sudo[56515]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xocefjfewviyxpxjqbawqaeouyiobeav ; /usr/bin/python3
Dec 06 08:19:35 np0005548790.localdomain sudo[56515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:36 np0005548790.localdomain python3[56517]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009175.2599905-92087-219146093370364/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=04fcaa63c42fa3b2b702e4421ebc774041538ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:36 np0005548790.localdomain sudo[56515]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.1c deep-scrub starts
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 3.1c deep-scrub ok
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: osd.3 43 crush map has features 432629239337189376, adjusting msgr requires for clients
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: osd.3 43 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: osd.3 43 crush map has features 3314933000854323200, adjusting msgr requires for osds
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.515188217s) [5,3,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 1177.080566406s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:36 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 43 pg[2.1f( empty local-lis/les=41/42 n=0 ec=29/20 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=13.515074730s) [5,3,4] r=1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1177.080566406s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:37 np0005548790.localdomain sshd[56532]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:19:37 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 06 08:19:37 np0005548790.localdomain ceph-osd[31627]: osd.0 43 crush map has features 432629239337189376, adjusting msgr requires for clients
Dec 06 08:19:37 np0005548790.localdomain ceph-osd[31627]: osd.0 43 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Dec 06 08:19:37 np0005548790.localdomain ceph-osd[31627]: osd.0 43 crush map has features 3314933000854323200, adjusting msgr requires for osds
Dec 06 08:19:37 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 06 08:19:38 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 06 08:19:38 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 06 08:19:38 np0005548790.localdomain sshd[56532]: Received disconnect from 103.226.138.52 port 59440:11: Bye Bye [preauth]
Dec 06 08:19:38 np0005548790.localdomain sshd[56532]: Disconnected from authenticating user root 103.226.138.52 port 59440 [preauth]
Dec 06 08:19:40 np0005548790.localdomain sudo[56579]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzkmdrjzyfgcdzunximjfigjogibapds ; /usr/bin/python3
Dec 06 08:19:40 np0005548790.localdomain sudo[56579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:40 np0005548790.localdomain python3[56581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:19:40 np0005548790.localdomain sudo[56579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:41 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 06 08:19:41 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 06 08:19:41 np0005548790.localdomain sudo[56622]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlamzovhtkmoqjyoibhmfnaplltzfszp ; /usr/bin/python3
Dec 06 08:19:41 np0005548790.localdomain sudo[56622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:41 np0005548790.localdomain python3[56624]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009180.5510907-92087-128279229208427/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=0cb3e740065655621c29366f25db5e0ef0002cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:41 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 46 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=9.459021568s) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 active pruub 1182.099975586s@ mbc={}] start_peering_interval up [5,0,1] -> [5,0,1], acting [5,0,1] -> [5,0,1], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:41 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 46 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=9.457352638s) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1182.099975586s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:41 np0005548790.localdomain sudo[56622]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1d( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1f( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.9( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.4( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.e( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.19( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.8( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.7( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.b( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.6( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1b( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.18( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.a( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.5( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.3( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.2( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.f( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.c( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1c( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.12( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.11( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.16( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.17( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.15( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1e( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.14( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.10( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.1a( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.13( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:42 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 47 pg[6.d( empty local-lis/les=35/36 n=0 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [5,0,1] r=1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:43 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 48 pg[7.0( v 40'39 (0'0,40'39] local-lis/les=37/38 n=22 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=9.503348351s) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 40'38 mlcod 40'38 active pruub 1184.157226562s@ mbc={}] start_peering_interval up [0,1,5] -> [0,1,5], acting [0,1,5] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:43 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 48 pg[7.0( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=9.503348351s) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 40'38 mlcod 0'0 unknown pruub 1184.157226562s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.f( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.4( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.8( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.5( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.9( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.a( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.7( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.2( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.e( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.d( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.c( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.0( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 40'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:44 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 49 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [0,1,5] r=0 lpr=48 pi=[37,48)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:45 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 06 08:19:45 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 06 08:19:45 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 06 08:19:45 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 06 08:19:45 np0005548790.localdomain sudo[56684]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eehhnosqaznbcadvxtpzdrohuszrqhet ; /usr/bin/python3
Dec 06 08:19:45 np0005548790.localdomain sudo[56684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:45 np0005548790.localdomain python3[56686]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:19:45 np0005548790.localdomain sudo[56684]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:46 np0005548790.localdomain sudo[56729]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bowejortlrkjmskcazmayvjnamlzowbh ; /usr/bin/python3
Dec 06 08:19:46 np0005548790.localdomain sudo[56729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:46 np0005548790.localdomain python3[56731]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009185.5437746-92414-246030080427924/source _original_basename=tmpq__m9e1r follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:46 np0005548790.localdomain sudo[56729]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:47 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 06 08:19:47 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 06 08:19:47 np0005548790.localdomain sudo[56791]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swirwixdlvgbnzsypatlbsorrevkyned ; /usr/bin/python3
Dec 06 08:19:47 np0005548790.localdomain sudo[56791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:47 np0005548790.localdomain python3[56793]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:19:47 np0005548790.localdomain sudo[56791]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:47 np0005548790.localdomain sudo[56834]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awlgkicuxucrgwlubcjleojoulbhpefd ; /usr/bin/python3
Dec 06 08:19:47 np0005548790.localdomain sudo[56834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:48 np0005548790.localdomain python3[56836]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009187.2629545-92504-104583310908045/source _original_basename=tmpzvcuh052 follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:19:48 np0005548790.localdomain sudo[56834]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:48 np0005548790.localdomain sudo[56864]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkdbtpyfslxqxxpandlaczeuqkspiwyx ; /usr/bin/python3
Dec 06 08:19:48 np0005548790.localdomain sudo[56864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:48 np0005548790.localdomain python3[56866]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Dec 06 08:19:48 np0005548790.localdomain crontab[56867]: (root) LIST (root)
Dec 06 08:19:48 np0005548790.localdomain crontab[56868]: (root) REPLACE (root)
Dec 06 08:19:48 np0005548790.localdomain sudo[56864]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:48 np0005548790.localdomain sudo[56882]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngypclrcwptthclscpxnadzjeipdeawk ; /usr/bin/python3
Dec 06 08:19:48 np0005548790.localdomain sudo[56882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:48 np0005548790.localdomain python3[56884]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:19:48 np0005548790.localdomain sudo[56882]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:49 np0005548790.localdomain sudo[56932]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znroazrsptojelnmgzfncezxyjhqdxnw ; /usr/bin/python3
Dec 06 08:19:49 np0005548790.localdomain sudo[56932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,4,5] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.2( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.8( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,1,5] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,2,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.1a( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,4,5] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020543098s) [0,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666992188s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,4], acting [5,0,1] -> [0,5,4], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.5( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019346237s) [0,1,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.665893555s@ mbc={}] start_peering_interval up [5,0,1] -> [0,1,2], acting [5,0,1] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.020543098s) [0,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.666992188s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.5( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019346237s) [0,1,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.665893555s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019780159s) [1,5,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666381836s@ mbc={}] start_peering_interval up [5,0,1] -> [1,5,3], acting [5,0,1] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019663811s) [1,5,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666381836s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.19( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.014815331s) [0,5,1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.661743164s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.19( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.014815331s) [0,5,1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.661743164s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.014967918s) [2,3,1] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.661987305s@ mbc={}] start_peering_interval up [5,0,1] -> [2,3,1], acting [5,0,1] -> [2,3,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.6( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.015528679s) [2,3,1] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.662597656s@ mbc={}] start_peering_interval up [5,0,1] -> [2,3,1], acting [5,0,1] -> [2,3,1], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.014930725s) [2,3,1] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.661987305s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.6( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.015477180s) [2,3,1] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.662597656s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018941879s) [1,0,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666503906s@ mbc={}] start_peering_interval up [5,0,1] -> [1,0,2], acting [5,0,1] -> [1,0,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.014855385s) [1,3,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.662597656s@ mbc={}] start_peering_interval up [5,0,1] -> [1,3,2], acting [5,0,1] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.3( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018113136s) [0,4,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.665893555s@ mbc={}] start_peering_interval up [5,0,1] -> [0,4,2], acting [5,0,1] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1b( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.014738083s) [1,3,2] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.662597656s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018569946s) [1,0,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666503906s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.3( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018113136s) [0,4,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.665893555s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019138336s) [0,5,1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.667358398s@ mbc={}] start_peering_interval up [5,0,1] -> [0,5,1], acting [5,0,1] -> [0,5,1], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.019138336s) [0,5,1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.667358398s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.12( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017692566s) [4,0,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666137695s@ mbc={}] start_peering_interval up [5,0,1] -> [4,0,2], acting [5,0,1] -> [4,0,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.12( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017615318s) [4,0,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666137695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.13( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018751144s) [2,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.667358398s@ mbc={}] start_peering_interval up [5,0,1] -> [2,4,3], acting [5,0,1] -> [2,4,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.13( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018714905s) [2,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.667358398s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.17( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018104553s) [1,2,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666748047s@ mbc={}] start_peering_interval up [5,0,1] -> [1,2,0], acting [5,0,1] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.15( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018254280s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666870117s@ mbc={}] start_peering_interval up [5,0,1] -> [3,4,5], acting [5,0,1] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.17( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018068314s) [1,2,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666748047s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.15( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018205643s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666870117s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018016815s) [4,0,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666992188s@ mbc={}] start_peering_interval up [5,0,1] -> [4,0,2], acting [5,0,1] -> [4,0,2], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017978668s) [4,0,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666992188s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.018012047s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.667114258s@ mbc={}] start_peering_interval up [5,0,1] -> [3,4,5], acting [5,0,1] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1a( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017974854s) [3,4,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.667114258s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.016670227s) [2,0,4] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666015625s@ mbc={}] start_peering_interval up [5,0,1] -> [2,0,4], acting [5,0,1] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.c( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.016444206s) [2,0,4] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666015625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.10( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017465591s) [2,1,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.667114258s@ mbc={}] start_peering_interval up [5,0,1] -> [2,1,3], acting [5,0,1] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.16( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017138481s) [5,0,4] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.667236328s@ mbc={}] start_peering_interval up [5,0,1] -> [5,0,4], acting [5,0,1] -> [5,0,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.16( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017090797s) [5,0,4] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.667236328s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.10( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017300606s) [2,1,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.667114258s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.14( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.017335892s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666870117s@ mbc={}] start_peering_interval up [5,0,1] -> [5,3,4], acting [5,0,1] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.11( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.016525269s) [5,1,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666870117s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,3], acting [5,0,1] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.14( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.016604424s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666870117s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.11( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.016445160s) [5,1,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666870117s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.015464783s) [5,4,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666137695s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,0], acting [5,0,1] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.015419006s) [5,4,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666137695s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.015151024s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.666015625s@ mbc={}] start_peering_interval up [5,0,1] -> [3,5,4], acting [5,0,1] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.2( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.015106201s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.666015625s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.7( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.010691643s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.661865234s@ mbc={}] start_peering_interval up [5,0,1] -> [3,5,4], acting [5,0,1] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.7( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.010643005s) [3,5,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.661865234s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.18( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.011219025s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.662597656s@ mbc={}] start_peering_interval up [5,0,1] -> [5,3,4], acting [5,0,1] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.18( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.011169434s) [5,3,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.662597656s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.9( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.010717392s) [5,1,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.662109375s@ mbc={}] start_peering_interval up [5,0,1] -> [5,1,0], acting [5,0,1] -> [5,1,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.9( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.010691643s) [5,1,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.662109375s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.8( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.011103630s) [3,1,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.662719727s@ mbc={}] start_peering_interval up [5,0,1] -> [3,1,5], acting [5,0,1] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.010147095s) [3,2,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.661865234s@ mbc={}] start_peering_interval up [5,0,1] -> [3,2,4], acting [5,0,1] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.e( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.010126114s) [3,2,4] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.661865234s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.8( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.011043549s) [3,1,5] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.662719727s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.009784698s) [5,4,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.661621094s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,0], acting [5,0,1] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1f( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.009640694s) [5,4,0] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.661621094s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.009531021s) [5,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active pruub 1189.662353516s@ mbc={}] start_peering_interval up [5,0,1] -> [5,4,3], acting [5,0,1] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 50 pg[6.1d( empty local-lis/les=46/47 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=9.009478569s) [5,4,3] r=-1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.662353516s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:49 np0005548790.localdomain sudo[56932]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:49 np0005548790.localdomain sudo[56950]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atmujlymutzsfhxtjayfuqdpupkylisr ; /usr/bin/python3
Dec 06 08:19:49 np0005548790.localdomain sudo[56950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:49 np0005548790.localdomain sudo[56950]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:49 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.1d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,4,3] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.11( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,1,3] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.14( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,3,4] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [2,3,1] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [2,4,3] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.10( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [2,1,3] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.6( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [2,3,1] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.18( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [5,3,4] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.1b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [1,3,2] r=1 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 50 pg[6.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [1,5,3] r=2 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 51 pg[6.15( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,4,5] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 51 pg[6.e( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,2,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 51 pg[6.8( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,1,5] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 51 pg[6.5( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,1,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 51 pg[6.d( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,5,1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 51 pg[6.19( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,5,1] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 51 pg[6.2( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 51 pg[6.1a( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,4,5] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 51 pg[6.7( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [3,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 51 pg[6.a( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,5,4] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 51 pg[6.3( empty local-lis/les=50/51 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0,4,2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:19:50 np0005548790.localdomain sudo[57054]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdqzdiidgjclkkqlcwkqrykoihhjvglg ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009190.0383785-92658-68135074768972/async_wrapper.py 681714433825 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009190.0383785-92658-68135074768972/AnsiballZ_command.py _
Dec 06 08:19:50 np0005548790.localdomain sudo[57054]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 08:19:50 np0005548790.localdomain ansible-async_wrapper.py[57056]: Invoked with 681714433825 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009190.0383785-92658-68135074768972/AnsiballZ_command.py _
Dec 06 08:19:50 np0005548790.localdomain ansible-async_wrapper.py[57059]: Starting module and watcher
Dec 06 08:19:50 np0005548790.localdomain ansible-async_wrapper.py[57059]: Start watching 57060 (3600)
Dec 06 08:19:50 np0005548790.localdomain ansible-async_wrapper.py[57060]: Start module (57060)
Dec 06 08:19:50 np0005548790.localdomain ansible-async_wrapper.py[57056]: Return async_wrapper task started.
Dec 06 08:19:50 np0005548790.localdomain sudo[57054]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:50 np0005548790.localdomain sudo[57078]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftbmacutujogmjuekswyionbkcvlueww ; /usr/bin/python3
Dec 06 08:19:50 np0005548790.localdomain sudo[57078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:19:51 np0005548790.localdomain python3[57080]: ansible-ansible.legacy.async_status Invoked with jid=681714433825.57056 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:19:51 np0005548790.localdomain sudo[57078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    (file & line not available)
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    (file & line not available)
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.11 seconds
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Notice: Applied catalog in 0.07 seconds
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Application:
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    Initial environment: production
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    Converged environment: production
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:          Run mode: user
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Changes:
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Events:
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Resources:
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:             Total: 10
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Time:
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:        Filebucket: 0.00
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:          Schedule: 0.00
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:              File: 0.00
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:              Exec: 0.01
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:            Augeas: 0.02
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    Transaction evaluation: 0.05
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    Catalog application: 0.07
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:    Config retrieval: 0.15
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:          Last run: 1765009194
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:             Total: 0.08
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]: Version:
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:            Config: 1765009194
Dec 06 08:19:54 np0005548790.localdomain puppet-user[57064]:            Puppet: 7.10.0
Dec 06 08:19:54 np0005548790.localdomain ansible-async_wrapper.py[57060]: Module complete (57060)
Dec 06 08:19:55 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 06 08:19:55 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 06 08:19:55 np0005548790.localdomain sudo[57192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:55 np0005548790.localdomain sudo[57192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:19:55 np0005548790.localdomain sudo[57192]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:55 np0005548790.localdomain sudo[57207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 08:19:55 np0005548790.localdomain sudo[57207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:19:55 np0005548790.localdomain ansible-async_wrapper.py[57059]: Done in kid B.
Dec 06 08:19:55 np0005548790.localdomain sudo[57207]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:55 np0005548790.localdomain sudo[57242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:19:55 np0005548790.localdomain sudo[57242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:19:55 np0005548790.localdomain sudo[57242]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:55 np0005548790.localdomain sudo[57257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:19:55 np0005548790.localdomain sudo[57257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 06 08:19:56 np0005548790.localdomain sudo[57257]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.546918869s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.702636719s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.547403336s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.703247070s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.546760559s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.702636719s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.547291756s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.703247070s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.545776367s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.702758789s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.545681000s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.702758789s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.545213699s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.702392578s@ mbc={}] start_peering_interval up [0,1,5] -> [2,1,0], acting [0,1,5] -> [2,1,0], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:56 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 52 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=11.545149803s) [2,1,0] r=2 lpr=52 pi=[48,52)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.702392578s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:57 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 06 08:19:57 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 06 08:19:57 np0005548790.localdomain sudo[57304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:19:57 np0005548790.localdomain sudo[57304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:19:57 np0005548790.localdomain sudo[57304]: pam_unix(sudo:session): session closed for user root
Dec 06 08:19:58 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 06 08:19:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:19:58 np0005548790.localdomain systemd[1]: tmp-crun.z0OpCf.mount: Deactivated successfully.
Dec 06 08:19:58 np0005548790.localdomain podman[57319]: 2025-12-06 08:19:58.569451124 +0000 UTC m=+0.082573645 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, version=17.1.12, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 06 08:19:58 np0005548790.localdomain podman[57319]: 2025-12-06 08:19:58.791192786 +0000 UTC m=+0.304315267 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 06 08:19:58 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.090608597s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.702636719s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.090409279s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.702636719s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=48/49 n=3 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.089915276s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.702392578s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=48/49 n=3 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.089739799s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.702392578s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.089468002s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.702636719s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.090249062s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1199.703125000s@ mbc={}] start_peering_interval up [0,1,5] -> [5,3,4], acting [0,1,5] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.089618683s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.703125000s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:19:59 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 54 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54 pruub=9.088979721s) [5,3,4] r=-1 lpr=54 pi=[48,54)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1199.702636719s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:00 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 06 08:20:00 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 06 08:20:00 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 54 pg[7.3( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=1 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:00 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 54 pg[7.7( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=1 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:00 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 54 pg[7.f( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=1 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:00 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 54 pg[7.b( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=54) [5,3,4] r=1 lpr=54 pi=[48,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:01 np0005548790.localdomain sudo[57360]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryrwulynwiapflyswegdurhtzwxqdwzg ; /usr/bin/python3
Dec 06 08:20:01 np0005548790.localdomain sudo[57360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:01 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Dec 06 08:20:01 np0005548790.localdomain python3[57362]: ansible-ansible.legacy.async_status Invoked with jid=681714433825.57056 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:20:01 np0005548790.localdomain sudo[57360]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:01 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Dec 06 08:20:01 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 56 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=48/49 n=4 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.611827850s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.702514648s@ mbc={}] start_peering_interval up [0,1,5] -> [2,3,4], acting [0,1,5] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:01 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 56 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.612537384s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.703247070s@ mbc={}] start_peering_interval up [0,1,5] -> [2,3,4], acting [0,1,5] -> [2,3,4], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:01 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 56 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=48/49 n=4 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.611731529s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.702514648s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:01 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 56 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=14.612440109s) [2,3,4] r=-1 lpr=56 pi=[48,56)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.703247070s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:01 np0005548790.localdomain sudo[57376]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubtoxebzxorrnxunsilgxcvnmzkqvqck ; /usr/bin/python3
Dec 06 08:20:01 np0005548790.localdomain sudo[57376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:02 np0005548790.localdomain python3[57378]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:20:02 np0005548790.localdomain sudo[57376]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:02 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 06 08:20:02 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 06 08:20:02 np0005548790.localdomain sudo[57392]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvgxbgmbpqhbfltwummieiuqfryuyfqi ; /usr/bin/python3
Dec 06 08:20:02 np0005548790.localdomain sudo[57392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:02 np0005548790.localdomain python3[57394]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:20:02 np0005548790.localdomain sudo[57392]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:02 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 56 pg[7.c( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,3,4] r=1 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:02 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 56 pg[7.4( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=56) [2,3,4] r=1 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:02 np0005548790.localdomain sudo[57442]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djrqtqswhbwestdtxrpnbkhkmbdslwru ; /usr/bin/python3
Dec 06 08:20:02 np0005548790.localdomain sudo[57442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:02 np0005548790.localdomain python3[57444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:03 np0005548790.localdomain sudo[57442]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:03 np0005548790.localdomain sudo[57460]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfoorrxiyplyoqpairurrhcdjrpjsino ; /usr/bin/python3
Dec 06 08:20:03 np0005548790.localdomain sudo[57460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:03 np0005548790.localdomain python3[57462]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpgjzekri3 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:20:03 np0005548790.localdomain sudo[57460]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:03 np0005548790.localdomain sudo[57490]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uksiwdpeichubrlmkutcepfekoexaboz ; /usr/bin/python3
Dec 06 08:20:03 np0005548790.localdomain sudo[57490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:03 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 58 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.859862328s) [4,5,0] r=2 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.703491211s@ mbc={}] start_peering_interval up [0,1,5] -> [4,5,0], acting [0,1,5] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:03 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 58 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.859773636s) [4,5,0] r=2 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.703491211s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:03 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 58 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.859122276s) [4,5,0] r=2 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1207.703491211s@ mbc={}] start_peering_interval up [0,1,5] -> [4,5,0], acting [0,1,5] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:03 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 58 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=48/49 n=2 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=58 pruub=12.858956337s) [4,5,0] r=2 lpr=58 pi=[48,58)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1207.703491211s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:03 np0005548790.localdomain python3[57492]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:03 np0005548790.localdomain sudo[57490]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:03 np0005548790.localdomain sudo[57506]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thdonwwccordorhbvysusiffsaiobyip ; /usr/bin/python3
Dec 06 08:20:03 np0005548790.localdomain sudo[57506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:04 np0005548790.localdomain sudo[57506]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:04 np0005548790.localdomain sudo[57594]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agleqdigtuathlsmjfnvdvouagkjiypm ; /usr/bin/python3
Dec 06 08:20:04 np0005548790.localdomain sudo[57594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:04 np0005548790.localdomain python3[57596]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 06 08:20:04 np0005548790.localdomain sudo[57594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:05 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 60 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.283287048s) [5,0,4] r=1 lpr=60 pi=[52,60)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1205.185913086s@ mbc={}] start_peering_interval up [2,1,0] -> [5,0,4], acting [2,1,0] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:05 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 60 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.279827118s) [5,0,4] r=1 lpr=60 pi=[52,60)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1205.182495117s@ mbc={}] start_peering_interval up [2,1,0] -> [5,0,4], acting [2,1,0] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:05 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 60 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.283192635s) [5,0,4] r=1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1205.185913086s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:05 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 60 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=52/53 n=2 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=60 pruub=8.279207230s) [5,0,4] r=1 lpr=60 pi=[52,60)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1205.182495117s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:05 np0005548790.localdomain sudo[57613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fcdnutrftnzktkncoynonhnljuieiluz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:05 np0005548790.localdomain sudo[57613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:05 np0005548790.localdomain python3[57615]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:05 np0005548790.localdomain sudo[57613]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:06 np0005548790.localdomain sudo[57629]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjltapuomwmzwsagzfhzlrwunvvjctyy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 06 08:20:06 np0005548790.localdomain sudo[57629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 06 08:20:06 np0005548790.localdomain sudo[57629]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 06 08:20:06 np0005548790.localdomain sudo[57645]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yashmjuyzpghcymabeqlvxtdqpppzoip ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:06 np0005548790.localdomain sudo[57645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 61 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.122896194s) [3,1,5] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1203.661010742s@ mbc={}] start_peering_interval up [5,3,4] -> [3,1,5], acting [5,3,4] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 61 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.122896194s) [3,1,5] r=0 lpr=61 pi=[54,61)/1 crt=40'39 mlcod 0'0 unknown pruub 1203.661010742s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 61 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.122694016s) [3,1,5] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1203.660888672s@ mbc={}] start_peering_interval up [5,3,4] -> [3,1,5], acting [5,3,4] -> [3,1,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:06 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 61 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=54/55 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61 pruub=10.122694016s) [3,1,5] r=0 lpr=61 pi=[54,61)/1 crt=40'39 mlcod 0'0 unknown pruub 1203.660888672s@ mbc={}] state<Start>: transitioning to Primary
Dec 06 08:20:06 np0005548790.localdomain python3[57647]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:20:06 np0005548790.localdomain sudo[57645]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:07 np0005548790.localdomain sudo[57695]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvsoimvtirjhllmcsygavqzovrpdtuim ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:07 np0005548790.localdomain sudo[57695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:07 np0005548790.localdomain python3[57697]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:07 np0005548790.localdomain sudo[57695]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:07 np0005548790.localdomain sudo[57713]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxuchuuglozlxwvyutpkghqhdxbwiatw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:07 np0005548790.localdomain sudo[57713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:07 np0005548790.localdomain python3[57715]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:07 np0005548790.localdomain sudo[57713]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:07 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 62 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=61/62 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61) [3,1,5] r=0 lpr=61 pi=[54,61)/1 crt=40'39 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:20:07 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 62 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=61/62 n=1 ec=48/37 lis/c=54/54 les/c/f=55/55/0 sis=61) [3,1,5] r=0 lpr=61 pi=[54,61)/1 crt=40'39 mlcod 0'0 active+degraded mbc={255={(2+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:20:07 np0005548790.localdomain sudo[57775]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbtqfefjzkzcqtvmzrsccqptooddgold ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:07 np0005548790.localdomain sudo[57775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:08 np0005548790.localdomain python3[57777]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:08 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 06 08:20:08 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 06 08:20:08 np0005548790.localdomain sudo[57775]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:08 np0005548790.localdomain sudo[57793]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkeyatlriqutxsfmpcugmrnkaktxeemm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:08 np0005548790.localdomain sudo[57793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:08 np0005548790.localdomain python3[57795]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:08 np0005548790.localdomain sudo[57793]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:20:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Cumulative writes: 4521 writes, 20K keys, 4521 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4521 writes, 429 syncs, 10.54 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 1264 writes, 4394 keys, 1264 commit groups, 1.0 writes per commit group, ingest: 1.94 MB, 0.00 MB/s
                                                          Interval WAL: 1264 writes, 285 syncs, 4.44 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 08:20:08 np0005548790.localdomain sudo[57855]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mixstkoatxtxvnbqtcklzcfmlwltoxop ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:08 np0005548790.localdomain sudo[57855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:08 np0005548790.localdomain python3[57857]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:08 np0005548790.localdomain sudo[57855]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:09 np0005548790.localdomain sudo[57873]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lexafmibwvgownddumrufjvxkufuuycf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:09 np0005548790.localdomain sudo[57873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:09 np0005548790.localdomain python3[57875]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:09 np0005548790.localdomain sudo[57873]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:09 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 63 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=15.077893257s) [2,0,1] r=1 lpr=63 pi=[48,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1215.702880859s@ mbc={}] start_peering_interval up [0,1,5] -> [2,0,1], acting [0,1,5] -> [2,0,1], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:09 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 63 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=63 pruub=15.077803612s) [2,0,1] r=1 lpr=63 pi=[48,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1215.702880859s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:09 np0005548790.localdomain sudo[57935]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmwdqogkwfxztjsqhfppohmhceynaxzt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:09 np0005548790.localdomain sudo[57935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:09 np0005548790.localdomain python3[57937]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:09 np0005548790.localdomain sudo[57935]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:09 np0005548790.localdomain sudo[57953]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btutqvuacdgrcdrzphgedhprwyduoybx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:09 np0005548790.localdomain sudo[57953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:09 np0005548790.localdomain python3[57955]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:09 np0005548790.localdomain sudo[57953]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:10 np0005548790.localdomain sudo[57983]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hckjqranmnbhskcovsdufewsakdnghma ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:10 np0005548790.localdomain sudo[57983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:10 np0005548790.localdomain python3[57985]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:20:10 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:20:10 np0005548790.localdomain systemd-rc-local-generator[58007]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:20:10 np0005548790.localdomain systemd-sysv-generator[58012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:20:10 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:20:10 np0005548790.localdomain sudo[57983]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:11 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 06 08:20:11 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 06 08:20:11 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 06 08:20:11 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 06 08:20:11 np0005548790.localdomain sudo[58069]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxnpksitzykqrbzfjzubelidpgspsgdt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:11 np0005548790.localdomain sudo[58069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:11 np0005548790.localdomain python3[58071]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:11 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 65 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.054638863s) [5,4,3] r=-1 lpr=65 pi=[48,65)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1215.703125000s@ mbc={}] start_peering_interval up [0,1,5] -> [5,4,3], acting [0,1,5] -> [5,4,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:11 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 65 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=65 pruub=13.054563522s) [5,4,3] r=-1 lpr=65 pi=[48,65)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1215.703125000s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:11 np0005548790.localdomain sudo[58069]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:11 np0005548790.localdomain sudo[58087]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjrcaltmuzjqocngxbneriobkxzzbrvy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:11 np0005548790.localdomain sudo[58087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:11 np0005548790.localdomain python3[58089]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:11 np0005548790.localdomain sudo[58087]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:12 np0005548790.localdomain sudo[58149]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikbjoztiqeopteuwzigolfrfwhzevorn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:12 np0005548790.localdomain sudo[58149]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:12 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 06 08:20:12 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 06 08:20:12 np0005548790.localdomain python3[58151]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:20:12 np0005548790.localdomain sudo[58149]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:12 np0005548790.localdomain sudo[58167]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jlpbuhfvuaesvbwopdcdxfnpbywijvtg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:12 np0005548790.localdomain sudo[58167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:12 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 65 pg[7.9( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=65) [5,4,3] r=2 lpr=65 pi=[48,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:12 np0005548790.localdomain python3[58169]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:12 np0005548790.localdomain sudo[58167]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:12 np0005548790.localdomain sudo[58197]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qfqagwwxyprgkfbcxirxrktkjtxklkll ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:12 np0005548790.localdomain sudo[58197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:20:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Cumulative writes: 4793 writes, 21K keys, 4793 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4793 writes, 494 syncs, 9.70 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 1406 writes, 4933 keys, 1406 commit groups, 1.0 writes per commit group, ingest: 2.18 MB, 0.00 MB/s
                                                          Interval WAL: 1406 writes, 298 syncs, 4.72 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 08:20:13 np0005548790.localdomain python3[58199]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:20:13 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:20:13 np0005548790.localdomain systemd-rc-local-generator[58224]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:20:13 np0005548790.localdomain systemd-sysv-generator[58227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:20:13 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:20:13 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 08:20:13 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 08:20:13 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 08:20:13 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 08:20:13 np0005548790.localdomain sudo[58197]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:13 np0005548790.localdomain sudo[58256]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvvbjhdtfjitzkarrquiixxdpaufjvqp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:13 np0005548790.localdomain sudo[58256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:14 np0005548790.localdomain python3[58258]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 06 08:20:14 np0005548790.localdomain sudo[58256]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:14 np0005548790.localdomain sshd[58259]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:20:14 np0005548790.localdomain sudo[58274]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bzrsubpllegqsjwgyzpuggbissqmobcx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:14 np0005548790.localdomain sudo[58274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:14 np0005548790.localdomain sshd[58259]: Invalid user solv from 193.32.162.146 port 46676
Dec 06 08:20:14 np0005548790.localdomain sudo[58274]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:14 np0005548790.localdomain sshd[58259]: Connection closed by invalid user solv 193.32.162.146 port 46676 [preauth]
Dec 06 08:20:15 np0005548790.localdomain sudo[58315]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-giwxbihnosbamjnhibriurwakfpgbtcn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:20:15 np0005548790.localdomain sudo[58315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:15 np0005548790.localdomain python3[58317]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 06 08:20:16 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Dec 06 08:20:16 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Dec 06 08:20:16 np0005548790.localdomain podman[58392]: 2025-12-06 08:20:16.102591345 +0000 UTC m=+0.082958856 container create f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, config_id=tripleo_step2, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=nova_virtqemud_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4)
Dec 06 08:20:16 np0005548790.localdomain podman[58393]: 2025-12-06 08:20:16.138921378 +0000 UTC m=+0.111951166 container create 57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step2, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute_init_log, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:20:16 np0005548790.localdomain podman[58392]: 2025-12-06 08:20:16.058388586 +0000 UTC m=+0.038756167 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7.scope.
Dec 06 08:20:16 np0005548790.localdomain podman[58393]: 2025-12-06 08:20:16.068244635 +0000 UTC m=+0.041274483 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c.scope.
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:20:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed344d323c02ab9e2880ed86e85e30fa2695ed2203789077293b9be1165d5141/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:20:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998b5ee4f740798b555f9dbe537836a42f089d97290b4d07cd81c8b9291a5941/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:16 np0005548790.localdomain podman[58392]: 2025-12-06 08:20:16.200343148 +0000 UTC m=+0.180710659 container init f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, config_id=tripleo_step2, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:20:16 np0005548790.localdomain podman[58392]: 2025-12-06 08:20:16.210536324 +0000 UTC m=+0.190903835 container start f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, batch=17.1_20251118.1)
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: libpod-f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7.scope: Deactivated successfully.
Dec 06 08:20:16 np0005548790.localdomain python3[58317]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Dec 06 08:20:16 np0005548790.localdomain podman[58393]: 2025-12-06 08:20:16.251513689 +0000 UTC m=+0.224543467 container init 57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_compute_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git)
Dec 06 08:20:16 np0005548790.localdomain podman[58393]: 2025-12-06 08:20:16.259621922 +0000 UTC m=+0.232651710 container start 57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step2, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: libpod-57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c.scope: Deactivated successfully.
Dec 06 08:20:16 np0005548790.localdomain python3[58317]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Dec 06 08:20:16 np0005548790.localdomain podman[58430]: 2025-12-06 08:20:16.278802475 +0000 UTC m=+0.048919944 container died f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step2, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container)
Dec 06 08:20:16 np0005548790.localdomain podman[58457]: 2025-12-06 08:20:16.334091163 +0000 UTC m=+0.051758277 container died 57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, distribution-scope=public, config_id=tripleo_step2, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:20:16 np0005548790.localdomain podman[58430]: 2025-12-06 08:20:16.353105772 +0000 UTC m=+0.123223231 container cleanup f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, architecture=x86_64, config_id=tripleo_step2, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, container_name=nova_virtqemud_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: libpod-conmon-f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7.scope: Deactivated successfully.
Dec 06 08:20:16 np0005548790.localdomain podman[58457]: 2025-12-06 08:20:16.409320515 +0000 UTC m=+0.126987639 container cleanup 57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, container_name=nova_compute_init_log, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: libpod-conmon-57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c.scope: Deactivated successfully.
Dec 06 08:20:16 np0005548790.localdomain podman[58582]: 2025-12-06 08:20:16.827640532 +0000 UTC m=+0.088338107 container create 9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, container_name=create_haproxy_wrapper, release=1761123044, config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, tcib_managed=true)
Dec 06 08:20:16 np0005548790.localdomain podman[58581]: 2025-12-06 08:20:16.84668965 +0000 UTC m=+0.115823647 container create 7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:20:16 np0005548790.localdomain podman[58582]: 2025-12-06 08:20:16.780040984 +0000 UTC m=+0.040738599 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2.scope.
Dec 06 08:20:16 np0005548790.localdomain podman[58581]: 2025-12-06 08:20:16.787408977 +0000 UTC m=+0.056543024 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6.scope.
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:20:16 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:20:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51b981582ce7f1a34b84e0c23c6399b8da018a3879f6d023b21a81fa9cc33483/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b834d84ece9c7f6d27bdbfaa42b19d8cf29e8c24c1847f75b1aabfc03aadccf/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 06 08:20:16 np0005548790.localdomain podman[58582]: 2025-12-06 08:20:16.91341772 +0000 UTC m=+0.174115295 container init 9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, config_id=tripleo_step2, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, architecture=x86_64, tcib_managed=true)
Dec 06 08:20:16 np0005548790.localdomain podman[58582]: 2025-12-06 08:20:16.922643572 +0000 UTC m=+0.183341177 container start 9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com)
Dec 06 08:20:16 np0005548790.localdomain podman[58582]: 2025-12-06 08:20:16.923193427 +0000 UTC m=+0.183891032 container attach 9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, container_name=create_haproxy_wrapper, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:20:16 np0005548790.localdomain podman[58581]: 2025-12-06 08:20:16.964029236 +0000 UTC m=+0.233163223 container init 7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:20:16 np0005548790.localdomain podman[58581]: 2025-12-06 08:20:16.972145619 +0000 UTC m=+0.241279606 container start 7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step2, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt)
Dec 06 08:20:16 np0005548790.localdomain podman[58581]: 2025-12-06 08:20:16.972410986 +0000 UTC m=+0.241544983 container attach 7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, build-date=2025-11-19T00:35:22Z, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step2, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:20:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-998b5ee4f740798b555f9dbe537836a42f089d97290b4d07cd81c8b9291a5941-merged.mount: Deactivated successfully.
Dec 06 08:20:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57b9bf5306f109165ac74eb2eff72e999ecb3d151b4e40aa05eaac2a1c8f403c-userdata-shm.mount: Deactivated successfully.
Dec 06 08:20:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ed344d323c02ab9e2880ed86e85e30fa2695ed2203789077293b9be1165d5141-merged.mount: Deactivated successfully.
Dec 06 08:20:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f18a81a4d2d76d3fef1c8cb12e58059e2b13f2cd3d914f70a56cb07c393d04e7-userdata-shm.mount: Deactivated successfully.
Dec 06 08:20:18 np0005548790.localdomain ovs-vsctl[58705]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: libpod-7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6.scope: Deactivated successfully.
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: libpod-7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6.scope: Consumed 2.128s CPU time.
Dec 06 08:20:19 np0005548790.localdomain podman[58581]: 2025-12-06 08:20:19.095051609 +0000 UTC m=+2.364185656 container died 7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vcs-type=git, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, container_name=create_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step2, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: tmp-crun.4dR7Tx.mount: Deactivated successfully.
Dec 06 08:20:19 np0005548790.localdomain podman[58832]: 2025-12-06 08:20:19.214566832 +0000 UTC m=+0.105044995 container cleanup 7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, release=1761123044, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, container_name=create_virtlogd_wrapper, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step2, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: libpod-conmon-7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6.scope: Deactivated successfully.
Dec 06 08:20:19 np0005548790.localdomain python3[58317]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Dec 06 08:20:19 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 67 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.361963272s) [4,2,3] r=-1 lpr=67 pi=[52,67)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1221.183105469s@ mbc={}] start_peering_interval up [2,1,0] -> [4,2,3], acting [2,1,0] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:19 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 67 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=52/53 n=1 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=10.361877441s) [4,2,3] r=-1 lpr=67 pi=[52,67)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1221.183105469s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: libpod-9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2.scope: Deactivated successfully.
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: libpod-9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2.scope: Consumed 2.143s CPU time.
Dec 06 08:20:19 np0005548790.localdomain podman[58582]: 2025-12-06 08:20:19.885199912 +0000 UTC m=+3.145897547 container died 9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.buildah.version=1.41.4, container_name=create_haproxy_wrapper, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12)
Dec 06 08:20:19 np0005548790.localdomain podman[58872]: 2025-12-06 08:20:19.960871815 +0000 UTC m=+0.063723251 container cleanup 9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vcs-type=git, container_name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:20:19 np0005548790.localdomain systemd[1]: libpod-conmon-9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2.scope: Deactivated successfully.
Dec 06 08:20:19 np0005548790.localdomain python3[58317]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Dec 06 08:20:20 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 06 08:20:20 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 06 08:20:20 np0005548790.localdomain sudo[58315]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:20 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 06 08:20:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1b834d84ece9c7f6d27bdbfaa42b19d8cf29e8c24c1847f75b1aabfc03aadccf-merged.mount: Deactivated successfully.
Dec 06 08:20:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7054783f221d5572d7f16cdeac7fc614535c0484e73c1df7d1d330d39c6bd6b6-userdata-shm.mount: Deactivated successfully.
Dec 06 08:20:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-51b981582ce7f1a34b84e0c23c6399b8da018a3879f6d023b21a81fa9cc33483-merged.mount: Deactivated successfully.
Dec 06 08:20:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a54e85cf5a552239d276f55bf931d60ef14c25baad96f6d3c600083073197e2-userdata-shm.mount: Deactivated successfully.
Dec 06 08:20:20 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 06 08:20:20 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 67 pg[7.a( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=52/52 les/c/f=53/53/0 sis=67) [4,2,3] r=2 lpr=67 pi=[52,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:20 np0005548790.localdomain sudo[58922]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwtpmijuzlweapbaapaxfuzhwyerjgpz ; /usr/bin/python3
Dec 06 08:20:20 np0005548790.localdomain sudo[58922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:20 np0005548790.localdomain python3[58924]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:20 np0005548790.localdomain sudo[58922]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:21 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 06 08:20:21 np0005548790.localdomain sudo[58970]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psackwyffffhzauhpizhsxzuvpgvntup ; /usr/bin/python3
Dec 06 08:20:21 np0005548790.localdomain sudo[58970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:21 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 06 08:20:21 np0005548790.localdomain sudo[58970]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:21 np0005548790.localdomain sudo[59013]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzckqqtvzfuzbbjxwewxbbvkmsqdaran ; /usr/bin/python3
Dec 06 08:20:21 np0005548790.localdomain sudo[59013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:21 np0005548790.localdomain sudo[59013]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 np0005548790.localdomain sudo[59043]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnmcdibyzbdfjbjuhbugbwzgkpvvrtif ; /usr/bin/python3
Dec 06 08:20:22 np0005548790.localdomain sudo[59043]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:22 np0005548790.localdomain python3[59045]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005548790 step=2 update_config_hash_only=False
Dec 06 08:20:22 np0005548790.localdomain sudo[59043]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:22 np0005548790.localdomain sudo[59059]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znsquinommoakmzkbxwdnlvbtxerdimq ; /usr/bin/python3
Dec 06 08:20:22 np0005548790.localdomain sudo[59059]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:22 np0005548790.localdomain python3[59061]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:20:22 np0005548790.localdomain sudo[59059]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:23 np0005548790.localdomain sudo[59075]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbpadeqvwavqorxcmxflkjrbjyksjtlg ; /usr/bin/python3
Dec 06 08:20:23 np0005548790.localdomain sudo[59075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:20:23 np0005548790.localdomain python3[59077]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 06 08:20:23 np0005548790.localdomain sudo[59075]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:24 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 06 08:20:24 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 06 08:20:25 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 06 08:20:25 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 06 08:20:26 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 06 08:20:26 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 06 08:20:29 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 70 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=56/57 n=1 ec=48/37 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.207274437s) [0,5,4] r=-1 lpr=70 pi=[56,70)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1229.454467773s@ mbc={}] start_peering_interval up [2,3,4] -> [0,5,4], acting [2,3,4] -> [0,5,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:29 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 70 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=56/57 n=1 ec=48/37 lis/c=56/56 les/c/f=57/57/0 sis=70 pruub=13.207192421s) [0,5,4] r=-1 lpr=70 pi=[56,70)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1229.454467773s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:29 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 70 pg[7.c( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=56/56 les/c/f=57/57/0 sis=70) [0,5,4] r=0 lpr=70 pi=[56,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:20:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:20:29 np0005548790.localdomain podman[59078]: 2025-12-06 08:20:29.57920223 +0000 UTC m=+0.091734446 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, container_name=metrics_qdr, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true)
Dec 06 08:20:29 np0005548790.localdomain podman[59078]: 2025-12-06 08:20:29.7768053 +0000 UTC m=+0.289337446 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, 
io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, container_name=metrics_qdr)
Dec 06 08:20:29 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:20:30 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 06 08:20:30 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 06 08:20:30 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 06 08:20:30 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 06 08:20:30 np0005548790.localdomain sshd[59109]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:20:30 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 71 pg[7.c( v 40'39 lc 40'10 (0'0,40'39] local-lis/les=70/71 n=1 ec=48/37 lis/c=56/56 les/c/f=57/57/0 sis=70) [0,5,4] r=0 lpr=70 pi=[56,70)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:20:31 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 72 pg[7.d( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72) [3,2,1] r=0 lpr=72 pi=[58,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 06 08:20:31 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 72 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=58/59 n=1 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=13.231806755s) [3,2,1] r=-1 lpr=72 pi=[58,72)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1235.902709961s@ mbc={}] start_peering_interval up [4,5,0] -> [3,2,1], acting [4,5,0] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:31 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 72 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=58/59 n=1 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72 pruub=13.231726646s) [3,2,1] r=-1 lpr=72 pi=[58,72)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1235.902709961s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:32 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 06 08:20:32 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 06 08:20:32 np0005548790.localdomain sshd[59109]: Received disconnect from 35.247.75.98 port 33862:11: Bye Bye [preauth]
Dec 06 08:20:32 np0005548790.localdomain sshd[59109]: Disconnected from authenticating user root 35.247.75.98 port 33862 [preauth]
Dec 06 08:20:32 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 73 pg[7.d( v 40'39 lc 40'8 (0'0,40'39] local-lis/les=72/73 n=1 ec=48/37 lis/c=58/58 les/c/f=59/59/0 sis=72) [3,2,1] r=0 lpr=72 pi=[58,72)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+3)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 06 08:20:32 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.d deep-scrub starts
Dec 06 08:20:33 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 74 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=48/37 lis/c=60/60 les/c/f=61/61/0 sis=74 pruub=13.190760612s) [2,1,3] r=-1 lpr=74 pi=[60,74)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1237.938598633s@ mbc={}] start_peering_interval up [5,0,4] -> [2,1,3], acting [5,0,4] -> [2,1,3], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:33 np0005548790.localdomain ceph-osd[31627]: osd.0 pg_epoch: 74 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=60/61 n=1 ec=48/37 lis/c=60/60 les/c/f=61/61/0 sis=74 pruub=13.190661430s) [2,1,3] r=-1 lpr=74 pi=[60,74)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1237.938598633s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:34 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 74 pg[7.e( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=60/60 les/c/f=61/61/0 sis=74) [2,1,3] r=2 lpr=74 pi=[60,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:34 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 06 08:20:35 np0005548790.localdomain ceph-osd[31627]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 06 08:20:35 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 76 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=61/62 n=1 ec=48/37 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=12.203823090s) [2,4,3] r=2 lpr=76 pi=[61,76)/1 crt=40'39 mlcod 0'0 active pruub 1234.580566406s@ mbc={255={}}] start_peering_interval up [3,1,5] -> [2,4,3], acting [3,1,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 06 08:20:35 np0005548790.localdomain ceph-osd[32586]: osd.3 pg_epoch: 76 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=61/62 n=1 ec=48/37 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=12.203693390s) [2,4,3] r=2 lpr=76 pi=[61,76)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1234.580566406s@ mbc={}] state<Start>: transitioning to Stray
Dec 06 08:20:39 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.e deep-scrub starts
Dec 06 08:20:39 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.e deep-scrub ok
Dec 06 08:20:41 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 06 08:20:41 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 06 08:20:42 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 06 08:20:42 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 06 08:20:46 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 06 08:20:46 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 06 08:20:50 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 06 08:20:50 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 06 08:20:52 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 06 08:20:52 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 06 08:20:57 np0005548790.localdomain sudo[59111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:20:57 np0005548790.localdomain sudo[59111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:20:57 np0005548790.localdomain sudo[59111]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:57 np0005548790.localdomain sudo[59126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:20:57 np0005548790.localdomain sudo[59126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:20:58 np0005548790.localdomain sudo[59126]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:58 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 06 08:20:58 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 06 08:20:58 np0005548790.localdomain sudo[59172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:20:58 np0005548790.localdomain sudo[59172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:20:58 np0005548790.localdomain sudo[59172]: pam_unix(sudo:session): session closed for user root
Dec 06 08:20:59 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 06 08:20:59 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 06 08:21:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:21:00 np0005548790.localdomain systemd[1]: tmp-crun.3MCoFu.mount: Deactivated successfully.
Dec 06 08:21:00 np0005548790.localdomain podman[59187]: 2025-12-06 08:21:00.551768869 +0000 UTC m=+0.072047690 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=metrics_qdr, config_id=tripleo_step1, release=1761123044, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:21:00 np0005548790.localdomain podman[59187]: 2025-12-06 08:21:00.775413512 +0000 UTC m=+0.295692393 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:21:00 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:21:07 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 06 08:21:07 np0005548790.localdomain ceph-osd[32586]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 06 08:21:17 np0005548790.localdomain sshd[59216]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:21:19 np0005548790.localdomain sshd[59216]: Received disconnect from 103.226.138.52 port 56538:11: Bye Bye [preauth]
Dec 06 08:21:19 np0005548790.localdomain sshd[59216]: Disconnected from authenticating user root 103.226.138.52 port 56538 [preauth]
Dec 06 08:21:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:21:31 np0005548790.localdomain podman[59218]: 2025-12-06 08:21:31.566651587 +0000 UTC m=+0.081658531 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team)
Dec 06 08:21:31 np0005548790.localdomain podman[59218]: 2025-12-06 08:21:31.767234958 +0000 UTC m=+0.282241882 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:46Z, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, maintainer=OpenStack TripleO Team)
Dec 06 08:21:31 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:21:58 np0005548790.localdomain sudo[59248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:21:58 np0005548790.localdomain sudo[59248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:21:58 np0005548790.localdomain sudo[59248]: pam_unix(sudo:session): session closed for user root
Dec 06 08:21:58 np0005548790.localdomain sudo[59263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:21:58 np0005548790.localdomain sudo[59263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:21:59 np0005548790.localdomain systemd[1]: tmp-crun.TmQIne.mount: Deactivated successfully.
Dec 06 08:21:59 np0005548790.localdomain podman[59348]: 2025-12-06 08:21:59.696666384 +0000 UTC m=+0.095472779 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, name=rhceph)
Dec 06 08:21:59 np0005548790.localdomain podman[59348]: 2025-12-06 08:21:59.800310368 +0000 UTC m=+0.199116693 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Dec 06 08:22:00 np0005548790.localdomain sudo[59263]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:00 np0005548790.localdomain sudo[59419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:22:00 np0005548790.localdomain sudo[59419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:22:00 np0005548790.localdomain sudo[59419]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:00 np0005548790.localdomain sudo[59434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:22:00 np0005548790.localdomain sudo[59434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:22:00 np0005548790.localdomain sudo[59434]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:01 np0005548790.localdomain sudo[59481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:22:01 np0005548790.localdomain sudo[59481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:22:01 np0005548790.localdomain sudo[59481]: pam_unix(sudo:session): session closed for user root
Dec 06 08:22:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:22:02 np0005548790.localdomain systemd[1]: tmp-crun.OhAlDT.mount: Deactivated successfully.
Dec 06 08:22:02 np0005548790.localdomain podman[59496]: 2025-12-06 08:22:02.570481479 +0000 UTC m=+0.088012380 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:49:46Z, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com)
Dec 06 08:22:02 np0005548790.localdomain podman[59496]: 2025-12-06 08:22:02.756255088 +0000 UTC m=+0.273786029 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 06 08:22:02 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:22:16 np0005548790.localdomain sshd[59524]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:22:18 np0005548790.localdomain sshd[59524]: Received disconnect from 43.163.123.45 port 36466:11: Bye Bye [preauth]
Dec 06 08:22:18 np0005548790.localdomain sshd[59524]: Disconnected from authenticating user root 43.163.123.45 port 36466 [preauth]
Dec 06 08:22:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:22:33 np0005548790.localdomain systemd[1]: tmp-crun.Vue6r9.mount: Deactivated successfully.
Dec 06 08:22:33 np0005548790.localdomain podman[59526]: 2025-12-06 08:22:33.592504595 +0000 UTC m=+0.104931370 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044)
Dec 06 08:22:33 np0005548790.localdomain podman[59526]: 2025-12-06 08:22:33.788050053 +0000 UTC m=+0.300476768 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step1, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr)
Dec 06 08:22:33 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:22:54 np0005548790.localdomain sshd[59555]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:22:58 np0005548790.localdomain sshd[59555]: Received disconnect from 103.226.138.52 port 39552:11: Bye Bye [preauth]
Dec 06 08:22:58 np0005548790.localdomain sshd[59555]: Disconnected from authenticating user root 103.226.138.52 port 39552 [preauth]
Dec 06 08:22:58 np0005548790.localdomain sshd[59557]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:23:01 np0005548790.localdomain sudo[59559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:23:01 np0005548790.localdomain sudo[59559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:23:01 np0005548790.localdomain sudo[59559]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:01 np0005548790.localdomain sudo[59574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:23:01 np0005548790.localdomain sudo[59574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:23:01 np0005548790.localdomain anacron[6186]: Job `cron.monthly' started
Dec 06 08:23:01 np0005548790.localdomain anacron[6186]: Job `cron.monthly' terminated
Dec 06 08:23:01 np0005548790.localdomain anacron[6186]: Normal exit (3 jobs run)
Dec 06 08:23:02 np0005548790.localdomain sudo[59574]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:02 np0005548790.localdomain sudo[59623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:23:02 np0005548790.localdomain sudo[59623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:23:02 np0005548790.localdomain sudo[59623]: pam_unix(sudo:session): session closed for user root
Dec 06 08:23:03 np0005548790.localdomain sshd[59557]: Received disconnect from 35.247.75.98 port 45100:11: Bye Bye [preauth]
Dec 06 08:23:03 np0005548790.localdomain sshd[59557]: Disconnected from authenticating user root 35.247.75.98 port 45100 [preauth]
Dec 06 08:23:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:23:04 np0005548790.localdomain podman[59638]: 2025-12-06 08:23:04.577417136 +0000 UTC m=+0.088861043 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true)
Dec 06 08:23:04 np0005548790.localdomain podman[59638]: 2025-12-06 08:23:04.792669017 +0000 UTC m=+0.304112904 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:23:04 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:23:32 np0005548790.localdomain sshd[59667]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:23:33 np0005548790.localdomain sshd[59667]: Invalid user solv from 193.32.162.146 port 59080
Dec 06 08:23:33 np0005548790.localdomain sshd[59667]: Connection closed by invalid user solv 193.32.162.146 port 59080 [preauth]
Dec 06 08:23:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:23:35 np0005548790.localdomain systemd[1]: tmp-crun.vGF4On.mount: Deactivated successfully.
Dec 06 08:23:35 np0005548790.localdomain podman[59669]: 2025-12-06 08:23:35.583292374 +0000 UTC m=+0.089466577 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, container_name=metrics_qdr)
Dec 06 08:23:35 np0005548790.localdomain podman[59669]: 2025-12-06 08:23:35.797312381 +0000 UTC m=+0.303486624 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Dec 06 08:23:35 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:24:02 np0005548790.localdomain sudo[59698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:24:02 np0005548790.localdomain sudo[59698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:24:02 np0005548790.localdomain sudo[59698]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:02 np0005548790.localdomain sudo[59713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:24:02 np0005548790.localdomain sudo[59713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:24:03 np0005548790.localdomain sudo[59713]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:04 np0005548790.localdomain sudo[59759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:24:04 np0005548790.localdomain sudo[59759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:24:04 np0005548790.localdomain sudo[59759]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:24:06 np0005548790.localdomain podman[59774]: 2025-12-06 08:24:06.56536872 +0000 UTC m=+0.079040998 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 06 08:24:06 np0005548790.localdomain podman[59774]: 2025-12-06 08:24:06.77313913 +0000 UTC m=+0.286811408 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:24:06 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:24:34 np0005548790.localdomain sshd[59803]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:24:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:24:37 np0005548790.localdomain podman[59805]: 2025-12-06 08:24:37.559242512 +0000 UTC m=+0.076883341 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:24:37 np0005548790.localdomain podman[59805]: 2025-12-06 08:24:37.755107165 +0000 UTC m=+0.272747994 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public)
Dec 06 08:24:37 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:24:37 np0005548790.localdomain sshd[59803]: Received disconnect from 103.226.138.52 port 41384:11: Bye Bye [preauth]
Dec 06 08:24:37 np0005548790.localdomain sshd[59803]: Disconnected from authenticating user root 103.226.138.52 port 41384 [preauth]
Dec 06 08:24:46 np0005548790.localdomain rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 08:24:46 np0005548790.localdomain rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 08:24:52 np0005548790.localdomain sudo[60058]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crxswkavwmkutnfzfgzumxkybeaipslj ; /usr/bin/python3
Dec 06 08:24:52 np0005548790.localdomain sudo[60058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:24:52 np0005548790.localdomain python3[60060]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:24:52 np0005548790.localdomain sudo[60058]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:52 np0005548790.localdomain sudo[60103]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvnfpakuhgkokcfuitcbpcpbrselvgjh ; /usr/bin/python3
Dec 06 08:24:52 np0005548790.localdomain sudo[60103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:24:52 np0005548790.localdomain python3[60105]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009492.2593222-98736-92541239118988/source _original_basename=tmpaavg0a10 follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:24:52 np0005548790.localdomain sudo[60103]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:54 np0005548790.localdomain sudo[60133]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjkukrwyrhakhjcbhzrcrbgcqajrdfbj ; /usr/bin/python3
Dec 06 08:24:54 np0005548790.localdomain sudo[60133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:24:54 np0005548790.localdomain python3[60135]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:24:54 np0005548790.localdomain sudo[60133]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:54 np0005548790.localdomain sudo[60183]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhrwfbvufzemdbvvxpwpqonajajliktr ; /usr/bin/python3
Dec 06 08:24:54 np0005548790.localdomain sudo[60183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:24:54 np0005548790.localdomain sudo[60183]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:54 np0005548790.localdomain sudo[60201]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddgphxnxudkisctvntyohaabiccbamzj ; /usr/bin/python3
Dec 06 08:24:54 np0005548790.localdomain sudo[60201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:24:55 np0005548790.localdomain sudo[60201]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:55 np0005548790.localdomain sudo[60305]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eziihddomnjjyibgwtqefwgnldsdamoq ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009495.2757747-98905-230810783330761/async_wrapper.py 500723554950 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009495.2757747-98905-230810783330761/AnsiballZ_command.py _
Dec 06 08:24:55 np0005548790.localdomain sudo[60305]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 08:24:55 np0005548790.localdomain ansible-async_wrapper.py[60307]: Invoked with 500723554950 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009495.2757747-98905-230810783330761/AnsiballZ_command.py _
Dec 06 08:24:55 np0005548790.localdomain ansible-async_wrapper.py[60310]: Starting module and watcher
Dec 06 08:24:55 np0005548790.localdomain ansible-async_wrapper.py[60310]: Start watching 60311 (3600)
Dec 06 08:24:55 np0005548790.localdomain ansible-async_wrapper.py[60311]: Start module (60311)
Dec 06 08:24:55 np0005548790.localdomain ansible-async_wrapper.py[60307]: Return async_wrapper task started.
Dec 06 08:24:55 np0005548790.localdomain sudo[60305]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:55 np0005548790.localdomain sudo[60329]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijlcuubogmgzbfovgtkbaezmcweomzyz ; /usr/bin/python3
Dec 06 08:24:55 np0005548790.localdomain sudo[60329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:24:56 np0005548790.localdomain python3[60331]: ansible-ansible.legacy.async_status Invoked with jid=500723554950.60307 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:24:56 np0005548790.localdomain sudo[60329]: pam_unix(sudo:session): session closed for user root
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    (file & line not available)
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    (file & line not available)
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.11 seconds
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Notice: Applied catalog in 0.04 seconds
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Application:
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    Initial environment: production
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    Converged environment: production
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:          Run mode: user
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Changes:
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Events:
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Resources:
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:             Total: 10
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Time:
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:          Schedule: 0.00
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:              File: 0.00
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:              Exec: 0.01
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:            Augeas: 0.01
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    Transaction evaluation: 0.03
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    Catalog application: 0.04
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:    Config retrieval: 0.15
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:          Last run: 1765009499
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:        Filebucket: 0.00
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:             Total: 0.05
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]: Version:
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:            Config: 1765009499
Dec 06 08:24:59 np0005548790.localdomain puppet-user[60328]:            Puppet: 7.10.0
Dec 06 08:24:59 np0005548790.localdomain ansible-async_wrapper.py[60311]: Module complete (60311)
Dec 06 08:25:00 np0005548790.localdomain ansible-async_wrapper.py[60310]: Done in kid B.
Dec 06 08:25:04 np0005548790.localdomain sudo[60442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:25:04 np0005548790.localdomain sudo[60442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:25:04 np0005548790.localdomain sudo[60442]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:04 np0005548790.localdomain sudo[60457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:25:04 np0005548790.localdomain sudo[60457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:25:04 np0005548790.localdomain sudo[60457]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:05 np0005548790.localdomain sudo[60504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:25:05 np0005548790.localdomain sudo[60504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:25:05 np0005548790.localdomain sudo[60504]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:06 np0005548790.localdomain sudo[60532]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aalwrgxnwjbcaopjndtbgtegnafhajxv ; /usr/bin/python3
Dec 06 08:25:06 np0005548790.localdomain sudo[60532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:06 np0005548790.localdomain sshd[60535]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:25:06 np0005548790.localdomain python3[60534]: ansible-ansible.legacy.async_status Invoked with jid=500723554950.60307 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:25:06 np0005548790.localdomain sudo[60532]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:06 np0005548790.localdomain sshd[60535]: Connection closed by authenticating user root 87.120.191.21 port 37102 [preauth]
Dec 06 08:25:07 np0005548790.localdomain sudo[60550]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-piemlwkoengxhzplndxqtnbemmjtqaow ; /usr/bin/python3
Dec 06 08:25:07 np0005548790.localdomain sudo[60550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:07 np0005548790.localdomain python3[60552]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:25:07 np0005548790.localdomain sudo[60550]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:07 np0005548790.localdomain sudo[60566]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljvrlttjrnfqqxwgfjredjopxylywnfy ; /usr/bin/python3
Dec 06 08:25:07 np0005548790.localdomain sudo[60566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:07 np0005548790.localdomain python3[60568]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:07 np0005548790.localdomain sudo[60566]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:08 np0005548790.localdomain sudo[60616]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-loresoqflvihkafqidlfkkgjccnslffr ; /usr/bin/python3
Dec 06 08:25:08 np0005548790.localdomain sudo[60616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:25:08 np0005548790.localdomain podman[60619]: 2025-12-06 08:25:08.126485125 +0000 UTC m=+0.091516062 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:25:08 np0005548790.localdomain python3[60618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:08 np0005548790.localdomain sudo[60616]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:08 np0005548790.localdomain sudo[60664]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vigwlsiakxtuazlhspkzjgxryoolermm ; /usr/bin/python3
Dec 06 08:25:08 np0005548790.localdomain sudo[60664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:08 np0005548790.localdomain podman[60619]: 2025-12-06 08:25:08.302061516 +0000 UTC m=+0.267092393 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:25:08 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:25:08 np0005548790.localdomain python3[60666]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpnzsqekct recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:25:08 np0005548790.localdomain sudo[60664]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:08 np0005548790.localdomain sudo[60694]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvvipsnsgkadukihjjlykkyoyghhrysq ; /usr/bin/python3
Dec 06 08:25:08 np0005548790.localdomain sudo[60694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:08 np0005548790.localdomain python3[60696]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:08 np0005548790.localdomain sudo[60694]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:08 np0005548790.localdomain sshd[60697]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:25:09 np0005548790.localdomain sudo[60711]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrntltcfvfepufsdupgwnmemtcgfiwlg ; /usr/bin/python3
Dec 06 08:25:09 np0005548790.localdomain sudo[60711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:09 np0005548790.localdomain sudo[60711]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:09 np0005548790.localdomain sudo[60799]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgjkuevsjmfpsigudhofmtgzweeylfoz ; /usr/bin/python3
Dec 06 08:25:09 np0005548790.localdomain sudo[60799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:10 np0005548790.localdomain python3[60801]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 06 08:25:10 np0005548790.localdomain sudo[60799]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:10 np0005548790.localdomain sudo[60818]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rebtfewtbmidrplndorpsgpetxijexlo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:10 np0005548790.localdomain sudo[60818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:11 np0005548790.localdomain python3[60820]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:11 np0005548790.localdomain sudo[60818]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:11 np0005548790.localdomain sudo[60834]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-huvybqjfbzjznwwkqbbcbrtbmzsixjtq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:11 np0005548790.localdomain sudo[60834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:11 np0005548790.localdomain sudo[60834]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:11 np0005548790.localdomain sudo[60850]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cutuwziktcsqobrebhiuaqswbensxbwo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:11 np0005548790.localdomain sudo[60850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:12 np0005548790.localdomain python3[60852]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:12 np0005548790.localdomain sudo[60850]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:12 np0005548790.localdomain sshd[60697]: Received disconnect from 35.247.75.98 port 47104:11: Bye Bye [preauth]
Dec 06 08:25:12 np0005548790.localdomain sshd[60697]: Disconnected from authenticating user root 35.247.75.98 port 47104 [preauth]
Dec 06 08:25:12 np0005548790.localdomain sudo[60900]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etisjzveyzqcqjnfbjcvjhlbhswzrwgx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:12 np0005548790.localdomain sudo[60900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:12 np0005548790.localdomain python3[60902]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:12 np0005548790.localdomain sudo[60900]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:12 np0005548790.localdomain sudo[60918]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juztadgxaaawnbhzavddiwcohkyjaino ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:12 np0005548790.localdomain sudo[60918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:13 np0005548790.localdomain python3[60920]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:13 np0005548790.localdomain sudo[60918]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:13 np0005548790.localdomain sudo[60980]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zebsrgaahyslxkjrrabqawnrblogbhtm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:13 np0005548790.localdomain sudo[60980]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:13 np0005548790.localdomain python3[60982]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:13 np0005548790.localdomain sudo[60980]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:13 np0005548790.localdomain sudo[60998]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcjdfvxhrjqzwkoglujcrfukgxrqamgv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:13 np0005548790.localdomain sudo[60998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:13 np0005548790.localdomain python3[61000]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:13 np0005548790.localdomain sudo[60998]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:14 np0005548790.localdomain sudo[61060]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dftzplcskfpallszdamhkvrxmejvawsh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:14 np0005548790.localdomain sudo[61060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:14 np0005548790.localdomain python3[61062]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:14 np0005548790.localdomain sudo[61060]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:14 np0005548790.localdomain sudo[61078]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dutegsllbhizuavdhpysoftiuyqqaiih ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:14 np0005548790.localdomain sudo[61078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:14 np0005548790.localdomain python3[61080]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:14 np0005548790.localdomain sudo[61078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:15 np0005548790.localdomain sudo[61140]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxuqztvksttsoetjlxvqqgumaxaiwghx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:15 np0005548790.localdomain sudo[61140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:15 np0005548790.localdomain python3[61142]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:15 np0005548790.localdomain sudo[61140]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:15 np0005548790.localdomain sudo[61158]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guidrungtzuzzipmrspggqowvjuzngoa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:15 np0005548790.localdomain sudo[61158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:15 np0005548790.localdomain python3[61160]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:15 np0005548790.localdomain sudo[61158]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:15 np0005548790.localdomain sudo[61188]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yddkofngyhtrtapvoctrtddavuiegfuk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:15 np0005548790.localdomain sudo[61188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:16 np0005548790.localdomain python3[61190]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:16 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:16 np0005548790.localdomain systemd-rc-local-generator[61214]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:16 np0005548790.localdomain systemd-sysv-generator[61218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:16 np0005548790.localdomain systemd[1]: Starting dnf makecache...
Dec 06 08:25:16 np0005548790.localdomain sudo[61188]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:16 np0005548790.localdomain dnf[61228]: Updating Subscription Management repositories.
Dec 06 08:25:16 np0005548790.localdomain sudo[61275]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddknobkjindbhjipoejsgduuhonpetqn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:16 np0005548790.localdomain sudo[61275]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:16 np0005548790.localdomain python3[61277]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:16 np0005548790.localdomain sudo[61275]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:17 np0005548790.localdomain sudo[61293]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ulfburewojmsthvmhqzqmmhqlpaqsxmo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:17 np0005548790.localdomain sudo[61293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:17 np0005548790.localdomain python3[61295]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:17 np0005548790.localdomain sudo[61293]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:17 np0005548790.localdomain sudo[61355]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovxhmzyrdagahexzbssmgoslglktrfka ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:17 np0005548790.localdomain sudo[61355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:17 np0005548790.localdomain python3[61357]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:25:17 np0005548790.localdomain sudo[61355]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:17 np0005548790.localdomain sudo[61373]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lekhepmmqogrihqljerqjxxwaozccvys ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:17 np0005548790.localdomain sudo[61373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:18 np0005548790.localdomain python3[61375]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:18 np0005548790.localdomain sudo[61373]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:18 np0005548790.localdomain dnf[61228]: Failed determining last makecache time.
Dec 06 08:25:18 np0005548790.localdomain sudo[61403]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqlxdnmcebtixnrcywsxbifwozufderb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:18 np0005548790.localdomain sudo[61403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:18 np0005548790.localdomain dnf[61228]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  31 kB/s | 4.5 kB     00:00
Dec 06 08:25:18 np0005548790.localdomain python3[61405]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:18 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:18 np0005548790.localdomain systemd-rc-local-generator[61430]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:18 np0005548790.localdomain systemd-sysv-generator[61433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:18 np0005548790.localdomain dnf[61228]: Red Hat Enterprise Linux 9 for x86_64 - High Av  32 kB/s | 4.0 kB     00:00
Dec 06 08:25:18 np0005548790.localdomain dnf[61228]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  32 kB/s | 4.5 kB     00:00
Dec 06 08:25:18 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 08:25:18 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 08:25:18 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 08:25:18 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 08:25:18 np0005548790.localdomain sudo[61403]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:19 np0005548790.localdomain dnf[61228]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   32 kB/s | 4.1 kB     00:00
Dec 06 08:25:19 np0005548790.localdomain sudo[61465]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdegaovigpompcwxepnpuhktzkpgfhgb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:19 np0005548790.localdomain sudo[61465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:19 np0005548790.localdomain dnf[61228]: Fast Datapath for RHEL 9 x86_64 (RPMs)           28 kB/s | 4.0 kB     00:00
Dec 06 08:25:19 np0005548790.localdomain python3[61467]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 06 08:25:19 np0005548790.localdomain sudo[61465]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:19 np0005548790.localdomain dnf[61228]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   29 kB/s | 4.1 kB     00:00
Dec 06 08:25:19 np0005548790.localdomain dnf[61228]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_  32 kB/s | 4.0 kB     00:00
Dec 06 08:25:19 np0005548790.localdomain sudo[61483]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpmczbhgqyoqgtpwbvdlviyfhgqtyyqq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:19 np0005548790.localdomain sudo[61483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:20 np0005548790.localdomain dnf[61228]: Metadata cache created.
Dec 06 08:25:20 np0005548790.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 08:25:20 np0005548790.localdomain systemd[1]: Finished dnf makecache.
Dec 06 08:25:20 np0005548790.localdomain systemd[1]: dnf-makecache.service: Consumed 2.788s CPU time.
Dec 06 08:25:20 np0005548790.localdomain sudo[61483]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:21 np0005548790.localdomain sudo[61524]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whuavedzvdpunxzapxgivqdiqpftgfvk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:21 np0005548790.localdomain sudo[61524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:21 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 06 08:25:21 np0005548790.localdomain podman[61682]: 2025-12-06 08:25:21.580477445 +0000 UTC m=+0.060623707 container create ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, name=rhosp17/openstack-collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.scope.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0521d8df5d3673de67e6c677f90dfbd55b1c1f914f1671502a747da6648e8a6d/merged/scripts supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain podman[61708]: 2025-12-06 08:25:21.623868923 +0000 UTC m=+0.080638301 container create b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0521d8df5d3673de67e6c677f90dfbd55b1c1f914f1671502a747da6648e8a6d/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:25:21 np0005548790.localdomain podman[61682]: 2025-12-06 08:25:21.64925775 +0000 UTC m=+0.129404042 container init ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3)
Dec 06 08:25:21 np0005548790.localdomain podman[61682]: 2025-12-06 08:25:21.555647183 +0000 UTC m=+0.035793455 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 06 08:25:21 np0005548790.localdomain sudo[61767]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:21 np0005548790.localdomain podman[61739]: 2025-12-06 08:25:21.66913122 +0000 UTC m=+0.091195423 container create 0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, container_name=nova_statedir_owner, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack 
TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope.
Dec 06 08:25:21 np0005548790.localdomain podman[61693]: 2025-12-06 08:25:21.575226775 +0000 UTC m=+0.040247704 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:21 np0005548790.localdomain podman[61699]: 2025-12-06 08:25:21.575703419 +0000 UTC m=+0.038578780 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 06 08:25:21 np0005548790.localdomain podman[61693]: 2025-12-06 08:25:21.677477593 +0000 UTC m=+0.142498492 container create 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, release=1761123044, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']})
Dec 06 08:25:21 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:21 np0005548790.localdomain podman[61708]: 2025-12-06 08:25:21.579052768 +0000 UTC m=+0.035822146 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain podman[61682]: 2025-12-06 08:25:21.694198448 +0000 UTC m=+0.174344710 container start ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6.scope.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 06 08:25:21 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 06 08:25:21 np0005548790.localdomain podman[61708]: 2025-12-06 08:25:21.707222925 +0000 UTC m=+0.163992313 container init b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.12, container_name=rsyslog, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94.scope.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:21 np0005548790.localdomain podman[61739]: 2025-12-06 08:25:21.614522353 +0000 UTC m=+0.036586556 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e776680c96e76fe14e56ff40e379ac9bdb9ba3d732b302c8beb7b6113cedc1ac/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain podman[61708]: 2025-12-06 08:25:21.716254397 +0000 UTC m=+0.173023775 container start b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=rsyslog)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e776680c96e76fe14e56ff40e379ac9bdb9ba3d732b302c8beb7b6113cedc1ac/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e776680c96e76fe14e56ff40e379ac9bdb9ba3d732b302c8beb7b6113cedc1ac/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:21 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=46b3928e39956af0ccbc08ab55267e91 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 06 08:25:21 np0005548790.localdomain sudo[61794]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:21 np0005548790.localdomain sudo[61794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:21 np0005548790.localdomain podman[61739]: 2025-12-06 08:25:21.728526014 +0000 UTC m=+0.150590207 container init 0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, container_name=nova_statedir_owner, tcib_managed=true, 
distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:21 np0005548790.localdomain podman[61739]: 2025-12-06 08:25:21.736576938 +0000 UTC m=+0.158641131 container start 0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, container_name=nova_statedir_owner, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 06 08:25:21 np0005548790.localdomain podman[61739]: 2025-12-06 08:25:21.736749083 +0000 UTC m=+0.158813286 container attach 0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, container_name=nova_statedir_owner, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step3, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com)
Dec 06 08:25:21 np0005548790.localdomain podman[61699]: 2025-12-06 08:25:21.741466429 +0000 UTC m=+0.204341780 container create 76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, container_name=ceilometer_init_log, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:25:21 np0005548790.localdomain sudo[61794]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5.scope.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: libpod-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:21 np0005548790.localdomain podman[61802]: 2025-12-06 08:25:21.777112459 +0000 UTC m=+0.041900709 container died b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:21 np0005548790.localdomain podman[61693]: 2025-12-06 08:25:21.787673121 +0000 UTC m=+0.252694010 container init 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 06 08:25:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ff3cc0882e58e51d95c7dbb8a9007371d8427f559dbbb0b928cad9ba7629e14/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: libpod-0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6.scope: Deactivated successfully.
Dec 06 08:25:21 np0005548790.localdomain podman[61772]: 2025-12-06 08:25:21.818972105 +0000 UTC m=+0.126441192 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:25:21 np0005548790.localdomain sudo[61862]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:21 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:21 np0005548790.localdomain podman[61699]: 2025-12-06 08:25:21.845918594 +0000 UTC m=+0.308793935 container init 76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_init_log, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: libpod-76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5.scope: Deactivated successfully.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Queued start job for default target Main User Target.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Created slice User Application Slice.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Reached target Paths.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Reached target Timers.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Starting D-Bus User Message Bus Socket...
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Starting Create User's Volatile Files and Directories...
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Finished Create User's Volatile Files and Directories.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Listening on D-Bus User Message Bus Socket.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Reached target Sockets.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Reached target Basic System.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Reached target Main User Target.
Dec 06 08:25:21 np0005548790.localdomain systemd[61783]: Startup finished in 138ms.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started User Manager for UID 0.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started Session c1 of User root.
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: Started Session c2 of User root.
Dec 06 08:25:21 np0005548790.localdomain sudo[61767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:21 np0005548790.localdomain sudo[61862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:21 np0005548790.localdomain podman[61699]: 2025-12-06 08:25:21.903293734 +0000 UTC m=+0.366169075 container start 76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_init_log, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:25:21 np0005548790.localdomain podman[61830]: 2025-12-06 08:25:21.905560214 +0000 UTC m=+0.119156828 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:25:21 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: libpod-conmon-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:21 np0005548790.localdomain podman[61772]: 2025-12-06 08:25:21.929237016 +0000 UTC m=+0.236706153 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z)
Dec 06 08:25:21 np0005548790.localdomain podman[61772]: unhealthy
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Failed with result 'exit-code'.
Dec 06 08:25:21 np0005548790.localdomain sudo[61862]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:21 np0005548790.localdomain podman[61693]: 2025-12-06 08:25:21.952292441 +0000 UTC m=+0.417313330 container start 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, build-date=2025-11-19T00:35:22Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper)
Dec 06 08:25:21 np0005548790.localdomain podman[61886]: 2025-12-06 08:25:21.954954452 +0000 UTC m=+0.087052602 container died 76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 06 08:25:21 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume 
/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: session-c2.scope: Deactivated successfully.
Dec 06 08:25:21 np0005548790.localdomain sudo[61767]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: session-c1.scope: Deactivated successfully.
Dec 06 08:25:21 np0005548790.localdomain podman[61886]: 2025-12-06 08:25:21.981082488 +0000 UTC m=+0.113180608 container cleanup 76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:25:21 np0005548790.localdomain systemd[1]: libpod-conmon-76a0e6307611b165843588abd223efcb401d3fe944a50aec29d0ee9d6fc5aef5.scope: Deactivated successfully.
Dec 06 08:25:22 np0005548790.localdomain podman[61739]: 2025-12-06 08:25:22.050320915 +0000 UTC m=+0.472385118 container died 0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_statedir_owner, config_id=tripleo_step3, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 06 08:25:22 np0005548790.localdomain podman[61848]: 2025-12-06 08:25:22.125268624 +0000 UTC m=+0.321962827 container cleanup 0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_statedir_owner, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: libpod-conmon-0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6.scope: Deactivated successfully.
Dec 06 08:25:22 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py
Dec 06 08:25:22 np0005548790.localdomain podman[62071]: 2025-12-06 08:25:22.46865311 +0000 UTC m=+0.084967626 container create e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: Started libpod-conmon-e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01.scope.
Dec 06 08:25:22 np0005548790.localdomain podman[62071]: 2025-12-06 08:25:22.431623773 +0000 UTC m=+0.047938309 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d301709acbc74facad7e2d0e0c7cb4c38dc70cc38063e9c8b691a2c4c7e687e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d301709acbc74facad7e2d0e0c7cb4c38dc70cc38063e9c8b691a2c4c7e687e/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d301709acbc74facad7e2d0e0c7cb4c38dc70cc38063e9c8b691a2c4c7e687e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d301709acbc74facad7e2d0e0c7cb4c38dc70cc38063e9c8b691a2c4c7e687e/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain podman[62071]: 2025-12-06 08:25:22.548932761 +0000 UTC m=+0.165247277 container init e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:25:22 np0005548790.localdomain podman[62071]: 2025-12-06 08:25:22.557706225 +0000 UTC m=+0.174020751 container start e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:25:22 np0005548790.localdomain podman[62101]: 2025-12-06 08:25:22.569590352 +0000 UTC m=+0.088260834 container create 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, container_name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3)
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308-userdata-shm.mount: Deactivated successfully.
Dec 06 08:25:22 np0005548790.localdomain podman[62101]: 2025-12-06 08:25:22.519765303 +0000 UTC m=+0.038435815 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: Started libpod-conmon-88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912.scope.
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:22 np0005548790.localdomain podman[62101]: 2025-12-06 08:25:22.650851809 +0000 UTC m=+0.169522251 container init 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=nova_virtsecretd, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, architecture=x86_64)
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: tmp-crun.J1aKWu.mount: Deactivated successfully.
Dec 06 08:25:22 np0005548790.localdomain podman[62101]: 2025-12-06 08:25:22.659807217 +0000 UTC m=+0.178477659 container start 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_virtsecretd, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:25:22 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:22 np0005548790.localdomain sudo[62129]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:22 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: Started Session c3 of User root.
Dec 06 08:25:22 np0005548790.localdomain sudo[62129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:22 np0005548790.localdomain sudo[62129]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:22 np0005548790.localdomain systemd[1]: session-c3.scope: Deactivated successfully.
Dec 06 08:25:23 np0005548790.localdomain podman[62247]: 2025-12-06 08:25:23.126611736 +0000 UTC m=+0.099441103 container create 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:25:23 np0005548790.localdomain podman[62254]: 2025-12-06 08:25:23.163681254 +0000 UTC m=+0.118992814 container create 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container)
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1.scope.
Dec 06 08:25:23 np0005548790.localdomain podman[62247]: 2025-12-06 08:25:23.078569185 +0000 UTC m=+0.051398612 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:23 np0005548790.localdomain podman[62254]: 2025-12-06 08:25:23.094046977 +0000 UTC m=+0.049358577 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.scope.
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain podman[62247]: 2025-12-06 08:25:23.207356049 +0000 UTC m=+0.180185416 container init 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_virtnodedevd, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:25:23 np0005548790.localdomain podman[62247]: 2025-12-06 08:25:23.217414108 +0000 UTC m=+0.190243475 container start 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, container_name=nova_virtnodedevd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:25:23 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180b3aecafe7c8da44d60e3a56d560d6da12982d5d153924b65220d20b7de3a0/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/180b3aecafe7c8da44d60e3a56d560d6da12982d5d153924b65220d20b7de3a0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain sudo[62285]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:23 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started Session c4 of User root.
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:25:23 np0005548790.localdomain podman[62254]: 2025-12-06 08:25:23.276988716 +0000 UTC m=+0.232300436 container init 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=iscsid, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 
17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4)
Dec 06 08:25:23 np0005548790.localdomain sudo[62285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:23 np0005548790.localdomain sudo[62303]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:25:23 np0005548790.localdomain podman[62254]: 2025-12-06 08:25:23.314561878 +0000 UTC m=+0.269873468 container start 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:25:23 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started Session c5 of User root.
Dec 06 08:25:23 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=02a6418b6bc78669b6757e55b0a3cf68 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 06 08:25:23 np0005548790.localdomain sudo[62303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:23 np0005548790.localdomain sudo[62285]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: session-c4.scope: Deactivated successfully.
Dec 06 08:25:23 np0005548790.localdomain sudo[62303]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: session-c5.scope: Deactivated successfully.
Dec 06 08:25:23 np0005548790.localdomain kernel: Loading iSCSI transport class v2.0-870.
Dec 06 08:25:23 np0005548790.localdomain podman[62304]: 2025-12-06 08:25:23.438828842 +0000 UTC m=+0.110571740 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1761123044, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container)
Dec 06 08:25:23 np0005548790.localdomain podman[62304]: 2025-12-06 08:25:23.525062521 +0000 UTC m=+0.196805409 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, container_name=iscsid, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:25:23 np0005548790.localdomain podman[62431]: 2025-12-06 08:25:23.720362349 +0000 UTC m=+0.071006394 container create cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=)
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c.scope.
Dec 06 08:25:23 np0005548790.localdomain podman[62431]: 2025-12-06 08:25:23.684577085 +0000 UTC m=+0.035221180 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:23 np0005548790.localdomain podman[62431]: 2025-12-06 08:25:23.804465122 +0000 UTC m=+0.155109167 container init cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public)
Dec 06 08:25:23 np0005548790.localdomain podman[62431]: 2025-12-06 08:25:23.8126595 +0000 UTC m=+0.163303545 container start cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 
'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']})
Dec 06 08:25:23 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:23 np0005548790.localdomain sudo[62451]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:23 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: Started Session c6 of User root.
Dec 06 08:25:23 np0005548790.localdomain sudo[62451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:23 np0005548790.localdomain sudo[62451]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:23 np0005548790.localdomain systemd[1]: session-c6.scope: Deactivated successfully.
Dec 06 08:25:24 np0005548790.localdomain podman[62538]: 2025-12-06 08:25:24.203899504 +0000 UTC m=+0.065619692 container create 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, version=17.1.12, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_id=tripleo_step3, container_name=nova_virtqemud, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: Started libpod-conmon-955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f.scope.
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain podman[62538]: 2025-12-06 08:25:24.174329325 +0000 UTC m=+0.036049523 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain podman[62538]: 2025-12-06 08:25:24.282481379 +0000 UTC m=+0.144201547 container init 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, container_name=nova_virtqemud, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:25:24 np0005548790.localdomain podman[62538]: 2025-12-06 08:25:24.292814294 +0000 UTC m=+0.154534482 container start 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_virtqemud, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 
'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']})
Dec 06 08:25:24 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:24 np0005548790.localdomain sudo[62557]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:24 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: Started Session c7 of User root.
Dec 06 08:25:24 np0005548790.localdomain sudo[62557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:24 np0005548790.localdomain sudo[62557]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: session-c7.scope: Deactivated successfully.
Dec 06 08:25:24 np0005548790.localdomain podman[62641]: 2025-12-06 08:25:24.81501292 +0000 UTC m=+0.087826074 container create 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtproxyd, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com)
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: Started libpod-conmon-062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2.scope.
Dec 06 08:25:24 np0005548790.localdomain podman[62641]: 2025-12-06 08:25:24.765564161 +0000 UTC m=+0.038377355 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:24 np0005548790.localdomain podman[62641]: 2025-12-06 08:25:24.895838746 +0000 UTC m=+0.168651900 container init 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, release=1761123044, maintainer=OpenStack TripleO Team, container_name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 08:25:24 np0005548790.localdomain podman[62641]: 2025-12-06 08:25:24.910912357 +0000 UTC m=+0.183725501 container start 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_virtproxyd, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container)
Dec 06 08:25:24 np0005548790.localdomain python3[61526]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:25:24 np0005548790.localdomain sudo[62660]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:24 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:25:24 np0005548790.localdomain systemd[1]: Started Session c8 of User root.
Dec 06 08:25:24 np0005548790.localdomain sudo[62660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:25 np0005548790.localdomain sudo[62660]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:25 np0005548790.localdomain systemd[1]: session-c8.scope: Deactivated successfully.
Dec 06 08:25:25 np0005548790.localdomain sudo[61524]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:25 np0005548790.localdomain sudo[62720]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubvwrkoeybnxfipfqehslzmdmglkhxfu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:25 np0005548790.localdomain sudo[62720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:25 np0005548790.localdomain python3[62722]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:25 np0005548790.localdomain sudo[62720]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:25 np0005548790.localdomain sudo[62736]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igeqwidunvauqexmypvarimqfdvcvqrx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:25 np0005548790.localdomain sudo[62736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:25 np0005548790.localdomain python3[62738]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:25 np0005548790.localdomain sudo[62736]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:25 np0005548790.localdomain sudo[62752]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwccsznplhuxwsdaefwnuhlbadefuday ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:25 np0005548790.localdomain sudo[62752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:26 np0005548790.localdomain python3[62754]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:26 np0005548790.localdomain sudo[62752]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:26 np0005548790.localdomain sudo[62768]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdhuguplxoibvltwvacxkgelufdiajnu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:26 np0005548790.localdomain sudo[62768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:26 np0005548790.localdomain python3[62770]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:26 np0005548790.localdomain sudo[62768]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:26 np0005548790.localdomain sudo[62784]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avscmngbhxdfpvhxqqzymlwgjuutnlzx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:26 np0005548790.localdomain sudo[62784]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:26 np0005548790.localdomain python3[62786]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:26 np0005548790.localdomain sudo[62784]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:26 np0005548790.localdomain sudo[62800]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zksiynrwtxyapoadeyvhmbsoarudmwhh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:26 np0005548790.localdomain sudo[62800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:26 np0005548790.localdomain python3[62802]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:26 np0005548790.localdomain sudo[62800]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:26 np0005548790.localdomain sudo[62816]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgwtegerftwgklrgfkuehmgdhgwybiyo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:26 np0005548790.localdomain sudo[62816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:27 np0005548790.localdomain python3[62818]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:27 np0005548790.localdomain sudo[62816]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:27 np0005548790.localdomain sudo[62832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzuqptuajaayepuqlxqsmhqqjkfrwwae ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:27 np0005548790.localdomain sudo[62832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:27 np0005548790.localdomain python3[62834]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:27 np0005548790.localdomain sudo[62832]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:27 np0005548790.localdomain sudo[62848]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyazozmckmfqoufbedbxzvmysxgbcjjj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:27 np0005548790.localdomain sudo[62848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:27 np0005548790.localdomain python3[62850]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:27 np0005548790.localdomain sudo[62848]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:27 np0005548790.localdomain sudo[62864]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mebqtapqgbuzkxclaxagnpjspdpuofww ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:27 np0005548790.localdomain sudo[62864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:27 np0005548790.localdomain python3[62866]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:27 np0005548790.localdomain sudo[62864]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:27 np0005548790.localdomain sudo[62880]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-miaaaljxligtzpypttlkcramhvecpmmt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:27 np0005548790.localdomain sudo[62880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:28 np0005548790.localdomain python3[62882]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:28 np0005548790.localdomain sudo[62880]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:28 np0005548790.localdomain sudo[62896]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-impngowgocavtrrjszzmsdraarkkerfc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:28 np0005548790.localdomain sudo[62896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:28 np0005548790.localdomain python3[62898]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:28 np0005548790.localdomain sudo[62896]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:28 np0005548790.localdomain sudo[62912]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nygaynyzixhxjtwukrplrchjlltrfhry ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:28 np0005548790.localdomain sudo[62912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:28 np0005548790.localdomain python3[62914]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:28 np0005548790.localdomain sudo[62912]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:28 np0005548790.localdomain sudo[62929]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hznbewpmkbbbnwqqbmklmkgbkcfrnxbu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:28 np0005548790.localdomain sudo[62929]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:28 np0005548790.localdomain python3[62931]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:28 np0005548790.localdomain sudo[62929]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:28 np0005548790.localdomain sudo[62945]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbmpvkkpecknpyorotnisueklooxmfev ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:28 np0005548790.localdomain sudo[62945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:29 np0005548790.localdomain python3[62947]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:29 np0005548790.localdomain sudo[62945]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:29 np0005548790.localdomain sudo[62961]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tuhgsjtxzokfyhbcnkqbuqcrwzposkqh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:29 np0005548790.localdomain sudo[62961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:29 np0005548790.localdomain python3[62963]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:29 np0005548790.localdomain sudo[62961]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:29 np0005548790.localdomain sudo[62977]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqvdwqcisyvnevdrqnexerbgecfkzstp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:29 np0005548790.localdomain sudo[62977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:29 np0005548790.localdomain python3[62979]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:29 np0005548790.localdomain sudo[62977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:29 np0005548790.localdomain sudo[62993]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sojoeneeulsexzwztwojyvupczicacuu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:29 np0005548790.localdomain sudo[62993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:29 np0005548790.localdomain python3[62995]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:25:29 np0005548790.localdomain sudo[62993]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:30 np0005548790.localdomain sudo[63054]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cttalchhtbywlbhtdjaufctlwijscthm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:30 np0005548790.localdomain sudo[63054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:30 np0005548790.localdomain python3[63056]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:30 np0005548790.localdomain sudo[63054]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:30 np0005548790.localdomain sudo[63083]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ouqdkbkrphcaafmyrpqnlpfkrelgtkhc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:30 np0005548790.localdomain sudo[63083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:31 np0005548790.localdomain python3[63085]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:31 np0005548790.localdomain sudo[63083]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:31 np0005548790.localdomain sudo[63112]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exsdaofkltgnsgmqusycnqikjqbkywbz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:31 np0005548790.localdomain sudo[63112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:31 np0005548790.localdomain python3[63114]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:31 np0005548790.localdomain sudo[63112]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:32 np0005548790.localdomain sudo[63141]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-krvrtxrtaykfcaqdppxmjxjyxqqviaiq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:32 np0005548790.localdomain sudo[63141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:32 np0005548790.localdomain python3[63143]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:32 np0005548790.localdomain sudo[63141]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:32 np0005548790.localdomain sudo[63170]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vrgknxxrrxbiebuthqpnfnhmhqtdavsp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:32 np0005548790.localdomain sudo[63170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:32 np0005548790.localdomain python3[63172]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:32 np0005548790.localdomain sudo[63170]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:33 np0005548790.localdomain sudo[63199]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cndrdywadeiqmrxasmtmcdwkfnloxezi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:33 np0005548790.localdomain sudo[63199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:33 np0005548790.localdomain python3[63201]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:33 np0005548790.localdomain sudo[63199]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:33 np0005548790.localdomain sudo[63228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzhrdelhruvfawkkcvuxgguhtjirppsd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:33 np0005548790.localdomain sudo[63228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:33 np0005548790.localdomain python3[63230]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:33 np0005548790.localdomain sudo[63228]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:34 np0005548790.localdomain sudo[63257]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmaznmljuzcqcbtymdvqouztbtbaqoxu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:34 np0005548790.localdomain sudo[63257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:34 np0005548790.localdomain python3[63259]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:34 np0005548790.localdomain sudo[63257]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:34 np0005548790.localdomain sudo[63286]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkbqpifpvojysovxxiiexlivefhckiiz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:34 np0005548790.localdomain sudo[63286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:34 np0005548790.localdomain python3[63288]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009530.031781-100140-151632675147431/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:34 np0005548790.localdomain sudo[63286]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:34 np0005548790.localdomain sudo[63302]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgfrlwqxifgdsxcvdjgmwpidnebfanro ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:34 np0005548790.localdomain sudo[63302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:35 np0005548790.localdomain python3[63304]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:35 np0005548790.localdomain systemd-sysv-generator[63330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:35 np0005548790.localdomain systemd-rc-local-generator[63325]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Activating special unit Exit the Session...
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped target Main User Target.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped target Basic System.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped target Paths.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped target Sockets.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped target Timers.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Closed D-Bus User Message Bus Socket.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Stopped Create User's Volatile Files and Directories.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Removed slice User Application Slice.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Reached target Shutdown.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Finished Exit the Session.
Dec 06 08:25:35 np0005548790.localdomain systemd[61783]: Reached target Exit the Session.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 08:25:35 np0005548790.localdomain sudo[63302]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 08:25:35 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 06 08:25:35 np0005548790.localdomain sudo[63356]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkrbwwjegsyyjokeqbghtrmwtvpqzmni ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:35 np0005548790.localdomain sudo[63356]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:36 np0005548790.localdomain python3[63358]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:37 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:37 np0005548790.localdomain systemd-rc-local-generator[63386]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:37 np0005548790.localdomain systemd-sysv-generator[63390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:37 np0005548790.localdomain systemd[1]: Starting collectd container...
Dec 06 08:25:37 np0005548790.localdomain systemd[1]: Started collectd container.
Dec 06 08:25:37 np0005548790.localdomain sudo[63356]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:37 np0005548790.localdomain sudo[63424]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbhtgfstxxfxlgunfbopyecrysuiplpb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:37 np0005548790.localdomain sudo[63424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:38 np0005548790.localdomain python3[63426]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:25:38 np0005548790.localdomain systemd[1]: tmp-crun.PfcXWs.mount: Deactivated successfully.
Dec 06 08:25:38 np0005548790.localdomain podman[63428]: 2025-12-06 08:25:38.578389532 +0000 UTC m=+0.092530908 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, release=1761123044, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z)
Dec 06 08:25:38 np0005548790.localdomain podman[63428]: 2025-12-06 08:25:38.795277365 +0000 UTC m=+0.309418701 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1)
Dec 06 08:25:38 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:25:39 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:39 np0005548790.localdomain systemd-rc-local-generator[63483]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:39 np0005548790.localdomain systemd-sysv-generator[63487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:39 np0005548790.localdomain systemd[1]: Starting iscsid container...
Dec 06 08:25:39 np0005548790.localdomain systemd[1]: Started iscsid container.
Dec 06 08:25:39 np0005548790.localdomain sudo[63424]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:39 np0005548790.localdomain sudo[63519]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfoogbwimgtcllquonwbmgahizyqhjvx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:39 np0005548790.localdomain sudo[63519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:40 np0005548790.localdomain python3[63521]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:40 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:40 np0005548790.localdomain systemd-rc-local-generator[63551]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:40 np0005548790.localdomain systemd-sysv-generator[63555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:40 np0005548790.localdomain systemd[1]: Starting nova_virtlogd_wrapper container...
Dec 06 08:25:40 np0005548790.localdomain systemd[1]: Started nova_virtlogd_wrapper container.
Dec 06 08:25:40 np0005548790.localdomain sudo[63519]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:40 np0005548790.localdomain sudo[63586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thpdbbrardclawaadfmtaoqiznhkqkty ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:40 np0005548790.localdomain sudo[63586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:41 np0005548790.localdomain python3[63588]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:41 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:41 np0005548790.localdomain systemd-rc-local-generator[63614]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:41 np0005548790.localdomain systemd-sysv-generator[63621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:41 np0005548790.localdomain systemd[1]: Starting nova_virtnodedevd container...
Dec 06 08:25:41 np0005548790.localdomain tripleo-start-podman-container[63628]: Creating additional drop-in dependency for "nova_virtnodedevd" (3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1)
Dec 06 08:25:41 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:41 np0005548790.localdomain systemd-rc-local-generator[63683]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:41 np0005548790.localdomain systemd-sysv-generator[63689]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:42 np0005548790.localdomain systemd[1]: Started nova_virtnodedevd container.
Dec 06 08:25:42 np0005548790.localdomain sudo[63586]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:42 np0005548790.localdomain sudo[63711]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqjaggorzkrscbrwxtauzhqczcadmvzf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:42 np0005548790.localdomain sudo[63711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:42 np0005548790.localdomain python3[63713]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:42 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:42 np0005548790.localdomain systemd-sysv-generator[63743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:42 np0005548790.localdomain systemd-rc-local-generator[63739]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:43 np0005548790.localdomain systemd[1]: Starting nova_virtproxyd container...
Dec 06 08:25:43 np0005548790.localdomain tripleo-start-podman-container[63753]: Creating additional drop-in dependency for "nova_virtproxyd" (062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2)
Dec 06 08:25:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:43 np0005548790.localdomain systemd-rc-local-generator[63810]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:43 np0005548790.localdomain systemd-sysv-generator[63814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:43 np0005548790.localdomain systemd[1]: Started nova_virtproxyd container.
Dec 06 08:25:43 np0005548790.localdomain sudo[63711]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:43 np0005548790.localdomain sudo[63835]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmdfjjkohcrhmeizajzwpumgnooxkaqa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:43 np0005548790.localdomain sudo[63835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:44 np0005548790.localdomain python3[63837]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:44 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:44 np0005548790.localdomain systemd-sysv-generator[63870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:44 np0005548790.localdomain systemd-rc-local-generator[63866]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:44 np0005548790.localdomain systemd[1]: Starting nova_virtqemud container...
Dec 06 08:25:44 np0005548790.localdomain tripleo-start-podman-container[63877]: Creating additional drop-in dependency for "nova_virtqemud" (955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f)
Dec 06 08:25:44 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:44 np0005548790.localdomain systemd-sysv-generator[63939]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:44 np0005548790.localdomain systemd-rc-local-generator[63936]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:45 np0005548790.localdomain systemd[1]: Started nova_virtqemud container.
Dec 06 08:25:45 np0005548790.localdomain sudo[63835]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:45 np0005548790.localdomain sudo[63958]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blrfwrzonxstjnekxdcnwarvvouvryaf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:45 np0005548790.localdomain sudo[63958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:45 np0005548790.localdomain python3[63960]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:45 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:45 np0005548790.localdomain systemd-rc-local-generator[63986]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:45 np0005548790.localdomain systemd-sysv-generator[63992]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:45 np0005548790.localdomain sshd[63999]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:25:46 np0005548790.localdomain systemd[1]: Starting nova_virtsecretd container...
Dec 06 08:25:47 np0005548790.localdomain tripleo-start-podman-container[64001]: Creating additional drop-in dependency for "nova_virtsecretd" (88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912)
Dec 06 08:25:47 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:47 np0005548790.localdomain systemd-sysv-generator[64060]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:47 np0005548790.localdomain systemd-rc-local-generator[64057]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:47 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:47 np0005548790.localdomain systemd[1]: Started nova_virtsecretd container.
Dec 06 08:25:47 np0005548790.localdomain sudo[63958]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:47 np0005548790.localdomain sudo[64082]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fizeuejooawumdrxubdiaxmhsvhuujls ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:47 np0005548790.localdomain sudo[64082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:47 np0005548790.localdomain python3[64084]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:48 np0005548790.localdomain systemd-sysv-generator[64113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:48 np0005548790.localdomain systemd-rc-local-generator[64108]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:48 np0005548790.localdomain systemd[1]: Starting nova_virtstoraged container...
Dec 06 08:25:48 np0005548790.localdomain tripleo-start-podman-container[64124]: Creating additional drop-in dependency for "nova_virtstoraged" (cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c)
Dec 06 08:25:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:48 np0005548790.localdomain systemd-rc-local-generator[64178]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:48 np0005548790.localdomain systemd-sysv-generator[64182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:48 np0005548790.localdomain systemd[1]: Started nova_virtstoraged container.
Dec 06 08:25:48 np0005548790.localdomain sudo[64082]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:49 np0005548790.localdomain sudo[64204]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlttewzvjwtbantrmiuamomuwfnmdzbo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:25:49 np0005548790.localdomain sudo[64204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:49 np0005548790.localdomain python3[64206]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:25:49 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:25:49 np0005548790.localdomain systemd-sysv-generator[64235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:25:49 np0005548790.localdomain systemd-rc-local-generator[64231]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:25:49 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:25:49 np0005548790.localdomain systemd[1]: Starting rsyslog container...
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: tmp-crun.B7V1et.mount: Deactivated successfully.
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:50 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:50 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:50 np0005548790.localdomain podman[64245]: 2025-12-06 08:25:50.118708642 +0000 UTC m=+0.125043865 container init b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 06 08:25:50 np0005548790.localdomain podman[64245]: 2025-12-06 08:25:50.128585686 +0000 UTC m=+0.134920909 container start b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, container_name=rsyslog, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, version=17.1.12)
Dec 06 08:25:50 np0005548790.localdomain podman[64245]: rsyslog
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: Started rsyslog container.
Dec 06 08:25:50 np0005548790.localdomain sudo[64264]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:50 np0005548790.localdomain sudo[64264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:50 np0005548790.localdomain sudo[64204]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:50 np0005548790.localdomain sudo[64264]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: libpod-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:50 np0005548790.localdomain podman[64281]: 2025-12-06 08:25:50.304862616 +0000 UTC m=+0.049375048 container died b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:25:50 np0005548790.localdomain podman[64281]: 2025-12-06 08:25:50.327389717 +0000 UTC m=+0.071902119 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
com.redhat.component=openstack-rsyslog-container, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:25:50 np0005548790.localdomain podman[64295]: 2025-12-06 08:25:50.423158561 +0000 UTC m=+0.056598940 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, container_name=rsyslog, architecture=x86_64, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git)
Dec 06 08:25:50 np0005548790.localdomain podman[64295]: rsyslog
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 06 08:25:50 np0005548790.localdomain sudo[64319]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwhiyynmihbwxyssgrzyksyqhvvztnwh ; /usr/bin/python3
Dec 06 08:25:50 np0005548790.localdomain sudo[64319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:50 np0005548790.localdomain python3[64322]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:50 np0005548790.localdomain sudo[64319]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: Stopped rsyslog container.
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: Starting rsyslog container...
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:50 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:50 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:50 np0005548790.localdomain podman[64323]: 2025-12-06 08:25:50.844925668 +0000 UTC m=+0.112813140 container init b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3)
Dec 06 08:25:50 np0005548790.localdomain podman[64323]: 2025-12-06 08:25:50.855200472 +0000 UTC m=+0.123087974 container start b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:25:50 np0005548790.localdomain podman[64323]: rsyslog
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: Started rsyslog container.
Dec 06 08:25:50 np0005548790.localdomain sudo[64342]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:50 np0005548790.localdomain sudo[64342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:50 np0005548790.localdomain sudo[64342]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:50 np0005548790.localdomain systemd[1]: libpod-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:51 np0005548790.localdomain podman[64345]: 2025-12-06 08:25:51.012671262 +0000 UTC m=+0.050732275 container died b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, name=rhosp17/openstack-rsyslog, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team)
Dec 06 08:25:51 np0005548790.localdomain podman[64345]: 2025-12-06 08:25:51.037943056 +0000 UTC m=+0.076004029 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container)
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3-merged.mount: Deactivated successfully.
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308-userdata-shm.mount: Deactivated successfully.
Dec 06 08:25:51 np0005548790.localdomain podman[64371]: 2025-12-06 08:25:51.121425141 +0000 UTC m=+0.057450983 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z)
Dec 06 08:25:51 np0005548790.localdomain podman[64371]: rsyslog
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 06 08:25:51 np0005548790.localdomain sudo[64415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycmljserkqsrqeqxtedcpgmwyhxndtlx ; /usr/bin/python3
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Stopped rsyslog container.
Dec 06 08:25:51 np0005548790.localdomain sudo[64415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Starting rsyslog container...
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:51 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:51 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:51 np0005548790.localdomain podman[64417]: 2025-12-06 08:25:51.372214439 +0000 UTC m=+0.121617005 container init b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044, name=rhosp17/openstack-rsyslog, vcs-type=git, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container)
Dec 06 08:25:51 np0005548790.localdomain podman[64417]: 2025-12-06 08:25:51.380630994 +0000 UTC m=+0.130033550 container start b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, name=rhosp17/openstack-rsyslog, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, container_name=rsyslog, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 06 08:25:51 np0005548790.localdomain podman[64417]: rsyslog
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Started rsyslog container.
Dec 06 08:25:51 np0005548790.localdomain sudo[64437]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:51 np0005548790.localdomain sudo[64437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:51 np0005548790.localdomain sudo[64415]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:51 np0005548790.localdomain sudo[64437]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: libpod-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:51 np0005548790.localdomain podman[64445]: 2025-12-06 08:25:51.537380594 +0000 UTC m=+0.050204190 container died b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, architecture=x86_64)
Dec 06 08:25:51 np0005548790.localdomain podman[64445]: 2025-12-06 08:25:51.559682358 +0000 UTC m=+0.072505914 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, version=17.1.12, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, architecture=x86_64)
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:25:51 np0005548790.localdomain podman[64478]: 2025-12-06 08:25:51.645331102 +0000 UTC m=+0.055682606 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.expose-services=, container_name=rsyslog, architecture=x86_64)
Dec 06 08:25:51 np0005548790.localdomain podman[64478]: rsyslog
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 06 08:25:51 np0005548790.localdomain sudo[64503]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jatxldaclttylgrtwajjsxcaywsmtbcf ; /usr/bin/python3
Dec 06 08:25:51 np0005548790.localdomain sudo[64503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:51 np0005548790.localdomain sudo[64503]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Stopped rsyslog container.
Dec 06 08:25:51 np0005548790.localdomain systemd[1]: Starting rsyslog container...
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3-merged.mount: Deactivated successfully.
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308-userdata-shm.mount: Deactivated successfully.
Dec 06 08:25:52 np0005548790.localdomain podman[64520]: 2025-12-06 08:25:52.070065218 +0000 UTC m=+0.084856763 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git)
Dec 06 08:25:52 np0005548790.localdomain podman[64520]: 2025-12-06 08:25:52.076838558 +0000 UTC m=+0.091630063 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:25:52 np0005548790.localdomain sudo[64563]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjwnjjrybchsmtbbskcpykzoyqjkjmrr ; /usr/bin/python3
Dec 06 08:25:52 np0005548790.localdomain sudo[64563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:52 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:52 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:52 np0005548790.localdomain podman[64521]: 2025-12-06 08:25:52.203592129 +0000 UTC m=+0.215431306 container init b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12)
Dec 06 08:25:52 np0005548790.localdomain podman[64521]: 2025-12-06 08:25:52.21224968 +0000 UTC m=+0.224088857 container start b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, batch=17.1_20251118.1, container_name=rsyslog, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, vcs-type=git)
Dec 06 08:25:52 np0005548790.localdomain podman[64521]: rsyslog
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: Started rsyslog container.
Dec 06 08:25:52 np0005548790.localdomain sudo[64574]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:52 np0005548790.localdomain sudo[64574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:52 np0005548790.localdomain sudo[64574]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: libpod-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:52 np0005548790.localdomain podman[64577]: 2025-12-06 08:25:52.368566409 +0000 UTC m=+0.049923373 container died b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.12, vendor=Red Hat, Inc., container_name=rsyslog, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible)
Dec 06 08:25:52 np0005548790.localdomain python3[64570]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005548790 step=3 update_config_hash_only=False
Dec 06 08:25:52 np0005548790.localdomain sudo[64563]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:52 np0005548790.localdomain podman[64577]: 2025-12-06 08:25:52.397413778 +0000 UTC m=+0.078770702 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:25:52 np0005548790.localdomain podman[64589]: 2025-12-06 08:25:52.470146727 +0000 UTC m=+0.044268911 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 06 08:25:52 np0005548790.localdomain podman[64589]: rsyslog
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4.
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: Stopped rsyslog container.
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: Starting rsyslog container...
Dec 06 08:25:52 np0005548790.localdomain sudo[64614]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owihgglrvarggsfenwjpjdgblceevglx ; /usr/bin/python3
Dec 06 08:25:52 np0005548790.localdomain sudo[64614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:25:52 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:52 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 06 08:25:52 np0005548790.localdomain podman[64616]: 2025-12-06 08:25:52.852615226 +0000 UTC m=+0.108546905 container init b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 06 08:25:52 np0005548790.localdomain podman[64616]: 2025-12-06 08:25:52.861613687 +0000 UTC m=+0.117545376 container start b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, container_name=rsyslog, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public)
Dec 06 08:25:52 np0005548790.localdomain podman[64616]: rsyslog
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: Started rsyslog container.
Dec 06 08:25:52 np0005548790.localdomain sudo[64636]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:25:52 np0005548790.localdomain sudo[64636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:25:52 np0005548790.localdomain python3[64622]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:25:52 np0005548790.localdomain sudo[64614]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:52 np0005548790.localdomain sudo[64636]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:52 np0005548790.localdomain systemd[1]: libpod-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308.scope: Deactivated successfully.
Dec 06 08:25:53 np0005548790.localdomain podman[64639]: 2025-12-06 08:25:53.007878887 +0000 UTC m=+0.034291646 container died b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=rsyslog, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Dec 06 08:25:53 np0005548790.localdomain podman[64639]: 2025-12-06 08:25:53.027046478 +0000 UTC m=+0.053459207 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, distribution-scope=public, build-date=2025-11-18T22:49:49Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-rsyslog, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3-merged.mount: Deactivated successfully.
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308-userdata-shm.mount: Deactivated successfully.
Dec 06 08:25:53 np0005548790.localdomain podman[64656]: 2025-12-06 08:25:53.098981896 +0000 UTC m=+0.042575576 container cleanup b67e27fc14f0be21208467b9fa175d80a2b0246c0cb08daebe7f2ccc2bf07308 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '46b3928e39956af0ccbc08ab55267e91'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:25:53 np0005548790.localdomain podman[64656]: rsyslog
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 06 08:25:53 np0005548790.localdomain sudo[64675]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpedovuoqawokgwdcclamnodqsgmmbzm ; /usr/bin/python3
Dec 06 08:25:53 np0005548790.localdomain sudo[64675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5.
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: Stopped rsyslog container.
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly.
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 06 08:25:53 np0005548790.localdomain systemd[1]: Failed to start rsyslog container.
Dec 06 08:25:53 np0005548790.localdomain python3[64680]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 06 08:25:53 np0005548790.localdomain sudo[64675]: pam_unix(sudo:session): session closed for user root
Dec 06 08:25:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:25:54 np0005548790.localdomain podman[64681]: 2025-12-06 08:25:54.562976066 +0000 UTC m=+0.077963530 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
architecture=x86_64, vcs-type=git, release=1761123044, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, com.redhat.component=openstack-iscsid-container, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:25:54 np0005548790.localdomain podman[64681]: 2025-12-06 08:25:54.577127583 +0000 UTC m=+0.092115097 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:25:54 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:25:56 np0005548790.localdomain sshd[63999]: error: kex_exchange_identification: read: Connection timed out
Dec 06 08:25:56 np0005548790.localdomain sshd[63999]: banner exchange: Connection from 14.103.76.234 port 49974: Connection timed out
Dec 06 08:26:05 np0005548790.localdomain sudo[64700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:26:05 np0005548790.localdomain sudo[64700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:26:05 np0005548790.localdomain sudo[64700]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:05 np0005548790.localdomain sudo[64715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:26:05 np0005548790.localdomain sudo[64715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:26:06 np0005548790.localdomain sudo[64715]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:07 np0005548790.localdomain sudo[64763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:26:07 np0005548790.localdomain sudo[64763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:26:07 np0005548790.localdomain sudo[64763]: pam_unix(sudo:session): session closed for user root
Dec 06 08:26:09 np0005548790.localdomain sshd[64778]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:26:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:26:09 np0005548790.localdomain podman[64780]: 2025-12-06 08:26:09.581840887 +0000 UTC m=+0.096366190 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 06 08:26:09 np0005548790.localdomain podman[64780]: 2025-12-06 08:26:09.779806686 +0000 UTC m=+0.294332029 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 06 08:26:09 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:26:10 np0005548790.localdomain sshd[64778]: Received disconnect from 103.226.138.52 port 51248:11: Bye Bye [preauth]
Dec 06 08:26:10 np0005548790.localdomain sshd[64778]: Disconnected from authenticating user root 103.226.138.52 port 51248 [preauth]
Dec 06 08:26:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:26:22 np0005548790.localdomain podman[64809]: 2025-12-06 08:26:22.575091032 +0000 UTC m=+0.092481357 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:26:22 np0005548790.localdomain podman[64809]: 2025-12-06 08:26:22.585698515 +0000 UTC m=+0.103088770 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-collectd)
Dec 06 08:26:22 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:26:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:26:25 np0005548790.localdomain podman[64830]: 2025-12-06 08:26:25.564662454 +0000 UTC m=+0.081564846 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:26:25 np0005548790.localdomain podman[64830]: 2025-12-06 08:26:25.604072305 +0000 UTC m=+0.120974687 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Dec 06 08:26:25 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:26:26 np0005548790.localdomain sshd[64849]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:26:28 np0005548790.localdomain sshd[64849]: Received disconnect from 43.163.123.45 port 38576:11: Bye Bye [preauth]
Dec 06 08:26:28 np0005548790.localdomain sshd[64849]: Disconnected from authenticating user root 43.163.123.45 port 38576 [preauth]
Dec 06 08:26:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:26:40 np0005548790.localdomain systemd[1]: tmp-crun.HFGIJ5.mount: Deactivated successfully.
Dec 06 08:26:40 np0005548790.localdomain podman[64851]: 2025-12-06 08:26:40.554273115 +0000 UTC m=+0.071515657 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red 
Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 06 08:26:40 np0005548790.localdomain podman[64851]: 2025-12-06 08:26:40.771054457 +0000 UTC m=+0.288296999 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:26:40 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:26:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:26:53 np0005548790.localdomain podman[64880]: 2025-12-06 08:26:53.566317175 +0000 UTC m=+0.081292189 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, container_name=collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container)
Dec 06 08:26:53 np0005548790.localdomain podman[64880]: 2025-12-06 08:26:53.605047918 +0000 UTC m=+0.120022912 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc.)
Dec 06 08:26:53 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:26:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:26:56 np0005548790.localdomain podman[64901]: 2025-12-06 08:26:56.547839361 +0000 UTC m=+0.065333133 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:26:56 np0005548790.localdomain podman[64901]: 2025-12-06 08:26:56.560044697 +0000 UTC m=+0.077538429 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4)
Dec 06 08:26:56 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:27:07 np0005548790.localdomain sudo[64920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:27:07 np0005548790.localdomain sudo[64920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:27:07 np0005548790.localdomain sudo[64920]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:07 np0005548790.localdomain sudo[64935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:27:07 np0005548790.localdomain sudo[64935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:27:08 np0005548790.localdomain sudo[64935]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:08 np0005548790.localdomain sudo[64981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:27:08 np0005548790.localdomain sudo[64981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:27:08 np0005548790.localdomain sudo[64981]: pam_unix(sudo:session): session closed for user root
Dec 06 08:27:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:27:11 np0005548790.localdomain systemd[1]: tmp-crun.Wav8Y0.mount: Deactivated successfully.
Dec 06 08:27:11 np0005548790.localdomain podman[64996]: 2025-12-06 08:27:11.570660458 +0000 UTC m=+0.081228367 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 06 08:27:11 np0005548790.localdomain podman[64996]: 2025-12-06 08:27:11.735366301 +0000 UTC m=+0.245934180 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, 
io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:27:11 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:27:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:27:24 np0005548790.localdomain podman[65026]: 2025-12-06 08:27:24.565195169 +0000 UTC m=+0.080363114 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1)
Dec 06 08:27:24 np0005548790.localdomain podman[65026]: 2025-12-06 08:27:24.579158871 +0000 UTC m=+0.094326756 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, config_id=tripleo_step3, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:27:24 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:27:25 np0005548790.localdomain sshd[65046]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:27:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:27:27 np0005548790.localdomain podman[65048]: 2025-12-06 08:27:27.555329034 +0000 UTC m=+0.070906991 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 08:27:27 np0005548790.localdomain podman[65048]: 2025-12-06 08:27:27.56229628 +0000 UTC m=+0.077874257 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:27:27 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:27:28 np0005548790.localdomain sshd[65046]: Received disconnect from 35.247.75.98 port 39034:11: Bye Bye [preauth]
Dec 06 08:27:28 np0005548790.localdomain sshd[65046]: Disconnected from authenticating user root 35.247.75.98 port 39034 [preauth]
Dec 06 08:27:36 np0005548790.localdomain sshd[65068]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:27:38 np0005548790.localdomain sshd[65068]: Received disconnect from 103.226.138.52 port 42206:11: Bye Bye [preauth]
Dec 06 08:27:38 np0005548790.localdomain sshd[65068]: Disconnected from authenticating user root 103.226.138.52 port 42206 [preauth]
Dec 06 08:27:42 np0005548790.localdomain sshd[65070]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:27:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:27:42 np0005548790.localdomain podman[65072]: 2025-12-06 08:27:42.580388722 +0000 UTC m=+0.096623476 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 06 08:27:42 np0005548790.localdomain podman[65072]: 2025-12-06 08:27:42.767077998 +0000 UTC m=+0.283312682 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com)
Dec 06 08:27:42 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:27:43 np0005548790.localdomain sshd[65070]: Received disconnect from 43.163.123.45 port 37242:11: Bye Bye [preauth]
Dec 06 08:27:43 np0005548790.localdomain sshd[65070]: Disconnected from authenticating user root 43.163.123.45 port 37242 [preauth]
Dec 06 08:27:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:27:55 np0005548790.localdomain podman[65099]: 2025-12-06 08:27:55.558532295 +0000 UTC m=+0.077115714 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:27:55 np0005548790.localdomain podman[65099]: 2025-12-06 08:27:55.573209177 +0000 UTC m=+0.091792636 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 06 08:27:55 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:27:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:27:58 np0005548790.localdomain systemd[1]: tmp-crun.S9pxT5.mount: Deactivated successfully.
Dec 06 08:27:58 np0005548790.localdomain podman[65119]: 2025-12-06 08:27:58.568582479 +0000 UTC m=+0.082568351 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z)
Dec 06 08:27:58 np0005548790.localdomain podman[65119]: 2025-12-06 08:27:58.601271983 +0000 UTC m=+0.115257865 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-iscsid-container, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:27:58 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:28:08 np0005548790.localdomain sudo[65139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:28:08 np0005548790.localdomain sudo[65139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:28:08 np0005548790.localdomain sudo[65139]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:08 np0005548790.localdomain sudo[65154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:28:08 np0005548790.localdomain sudo[65154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:28:09 np0005548790.localdomain sudo[65154]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:10 np0005548790.localdomain sudo[65200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:28:10 np0005548790.localdomain sudo[65200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:28:10 np0005548790.localdomain sudo[65200]: pam_unix(sudo:session): session closed for user root
Dec 06 08:28:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:28:13 np0005548790.localdomain podman[65215]: 2025-12-06 08:28:13.56363082 +0000 UTC m=+0.079702443 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:28:13 np0005548790.localdomain podman[65215]: 2025-12-06 08:28:13.728010869 +0000 UTC m=+0.244082422 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, config_id=tripleo_step1, tcib_managed=true, managed_by=tripleo_ansible, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:28:13 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:28:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:28:26 np0005548790.localdomain systemd[1]: tmp-crun.QAPweh.mount: Deactivated successfully.
Dec 06 08:28:26 np0005548790.localdomain podman[65244]: 2025-12-06 08:28:26.564638126 +0000 UTC m=+0.080306659 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, distribution-scope=public, build-date=2025-11-18T22:51:28Z, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 08:28:26 np0005548790.localdomain podman[65244]: 2025-12-06 08:28:26.578245441 +0000 UTC m=+0.093913964 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, container_name=collectd, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:28:26 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:28:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:28:29 np0005548790.localdomain podman[65263]: 2025-12-06 08:28:29.559715249 +0000 UTC m=+0.079033305 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:28:29 np0005548790.localdomain podman[65263]: 2025-12-06 08:28:29.573134709 +0000 UTC m=+0.092452765 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, name=rhosp17/openstack-iscsid)
Dec 06 08:28:29 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:28:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:28:44 np0005548790.localdomain podman[65283]: 2025-12-06 08:28:44.568821498 +0000 UTC m=+0.086789104 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public)
Dec 06 08:28:44 np0005548790.localdomain podman[65283]: 2025-12-06 08:28:44.755309838 +0000 UTC m=+0.273277374 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team)
Dec 06 08:28:44 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:28:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:28:57 np0005548790.localdomain podman[65312]: 2025-12-06 08:28:57.56526418 +0000 UTC m=+0.083641950 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 06 08:28:57 np0005548790.localdomain podman[65312]: 2025-12-06 08:28:57.597870212 +0000 UTC m=+0.116247972 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, config_id=tripleo_step3)
Dec 06 08:28:57 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:28:58 np0005548790.localdomain sshd[65332]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:28:59 np0005548790.localdomain sshd[65332]: Received disconnect from 43.163.123.45 port 35904:11: Bye Bye [preauth]
Dec 06 08:28:59 np0005548790.localdomain sshd[65332]: Disconnected from authenticating user root 43.163.123.45 port 35904 [preauth]
Dec 06 08:28:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:29:00 np0005548790.localdomain podman[65334]: 2025-12-06 08:29:00.007343375 +0000 UTC m=+0.059118143 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com)
Dec 06 08:29:00 np0005548790.localdomain podman[65334]: 2025-12-06 08:29:00.019172081 +0000 UTC m=+0.070946849 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 08:29:00 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:29:04 np0005548790.localdomain sshd[65353]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:29:06 np0005548790.localdomain sshd[65353]: Received disconnect from 103.226.138.52 port 52562:11: Bye Bye [preauth]
Dec 06 08:29:06 np0005548790.localdomain sshd[65353]: Disconnected from authenticating user root 103.226.138.52 port 52562 [preauth]
Dec 06 08:29:10 np0005548790.localdomain sudo[65355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:10 np0005548790.localdomain sudo[65355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:29:10 np0005548790.localdomain sudo[65355]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:10 np0005548790.localdomain sudo[65370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:29:10 np0005548790.localdomain sudo[65370]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:29:11 np0005548790.localdomain sudo[65370]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:11 np0005548790.localdomain sudo[65417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:29:11 np0005548790.localdomain sudo[65417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:29:11 np0005548790.localdomain sudo[65417]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:11 np0005548790.localdomain sudo[65432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 08:29:11 np0005548790.localdomain sudo[65432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:29:12 np0005548790.localdomain sudo[65432]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:29:15 np0005548790.localdomain systemd[1]: tmp-crun.HWHDJv.mount: Deactivated successfully.
Dec 06 08:29:15 np0005548790.localdomain podman[65466]: 2025-12-06 08:29:15.573331818 +0000 UTC m=+0.090802012 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 06 08:29:15 np0005548790.localdomain podman[65466]: 2025-12-06 08:29:15.785971368 +0000 UTC m=+0.303441622 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.openshift.expose-services=)
Dec 06 08:29:15 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:29:17 np0005548790.localdomain sudo[65493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:29:17 np0005548790.localdomain sudo[65493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:29:17 np0005548790.localdomain sudo[65493]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:29:28 np0005548790.localdomain podman[65509]: 2025-12-06 08:29:28.583631181 +0000 UTC m=+0.096337268 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 06 08:29:28 np0005548790.localdomain podman[65509]: 2025-12-06 08:29:28.62021217 +0000 UTC m=+0.132918217 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, batch=17.1_20251118.1)
Dec 06 08:29:28 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:29:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:29:30 np0005548790.localdomain podman[65530]: 2025-12-06 08:29:30.562268426 +0000 UTC m=+0.080067323 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 08:29:30 np0005548790.localdomain podman[65530]: 2025-12-06 08:29:30.59570106 +0000 UTC m=+0.113499927 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20251118.1, container_name=iscsid, 
architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team)
Dec 06 08:29:30 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:29:33 np0005548790.localdomain sshd[65547]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:29:36 np0005548790.localdomain sshd[65547]: Received disconnect from 35.247.75.98 port 57442:11: Bye Bye [preauth]
Dec 06 08:29:36 np0005548790.localdomain sshd[65547]: Disconnected from authenticating user root 35.247.75.98 port 57442 [preauth]
Dec 06 08:29:44 np0005548790.localdomain sudo[65594]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvmlvyctstuysswgqgvucjooslzlvbhj ; /usr/bin/python3
Dec 06 08:29:44 np0005548790.localdomain sudo[65594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:45 np0005548790.localdomain python3[65596]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:29:45 np0005548790.localdomain sudo[65594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:45 np0005548790.localdomain sudo[65639]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qtuxberxorllxpbknxtuabnzqkhwzsoj ; /usr/bin/python3
Dec 06 08:29:45 np0005548790.localdomain sudo[65639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:45 np0005548790.localdomain python3[65641]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009784.697813-107276-95595430571874/source _original_basename=tmpa6by6363 follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:29:45 np0005548790.localdomain sudo[65639]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:29:46 np0005548790.localdomain systemd[1]: tmp-crun.2zgTsh.mount: Deactivated successfully.
Dec 06 08:29:46 np0005548790.localdomain podman[65669]: 2025-12-06 08:29:46.581634217 +0000 UTC m=+0.088372886 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1)
Dec 06 08:29:46 np0005548790.localdomain sudo[65722]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtmuechexiuucdqydchcuswcaqyojpss ; /usr/bin/python3
Dec 06 08:29:46 np0005548790.localdomain podman[65669]: 2025-12-06 08:29:46.797213136 +0000 UTC m=+0.303951775 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Dec 06 08:29:46 np0005548790.localdomain sudo[65722]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:46 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:29:47 np0005548790.localdomain python3[65731]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:29:47 np0005548790.localdomain sudo[65722]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:47 np0005548790.localdomain sudo[65772]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhcrjahghnztvcsisviqufkvzoevbsms ; /usr/bin/python3
Dec 06 08:29:47 np0005548790.localdomain sudo[65772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:47 np0005548790.localdomain python3[65774]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009786.459038-107384-14351003990703/source _original_basename=tmptwxmb8iz follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:29:47 np0005548790.localdomain sudo[65772]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:47 np0005548790.localdomain sudo[65834]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-reveebwbwrmpjxxkqjorujpfgrqowoka ; /usr/bin/python3
Dec 06 08:29:47 np0005548790.localdomain sudo[65834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:48 np0005548790.localdomain python3[65836]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:29:48 np0005548790.localdomain sudo[65834]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:48 np0005548790.localdomain sudo[65877]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqixwrsfihwuipaewjrcoqkhubbsvoul ; /usr/bin/python3
Dec 06 08:29:48 np0005548790.localdomain sudo[65877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:48 np0005548790.localdomain python3[65879]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009787.8174038-107460-127061783144896/source _original_basename=tmpfdd_oo_b follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:29:48 np0005548790.localdomain sudo[65877]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:48 np0005548790.localdomain sudo[65939]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlxjdpmlqxqtvxiqbhywrvzjkgbwnxaw ; /usr/bin/python3
Dec 06 08:29:48 np0005548790.localdomain sudo[65939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:49 np0005548790.localdomain python3[65941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:29:49 np0005548790.localdomain sudo[65939]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:49 np0005548790.localdomain sudo[65982]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htzrddlpngowyjltyxaqvkbcezlaezdg ; /usr/bin/python3
Dec 06 08:29:49 np0005548790.localdomain sudo[65982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:49 np0005548790.localdomain python3[65984]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009788.7486663-107513-40551578675628/source _original_basename=tmpxria39z3 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:29:49 np0005548790.localdomain sudo[65982]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:49 np0005548790.localdomain sudo[66012]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-goiuhnyydpsrvwosjggvkbppnirayctd ; /usr/bin/python3
Dec 06 08:29:49 np0005548790.localdomain sudo[66012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:50 np0005548790.localdomain python3[66014]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 08:29:50 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:29:50 np0005548790.localdomain systemd-rc-local-generator[66043]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:29:50 np0005548790.localdomain systemd-sysv-generator[66046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:29:50 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:29:50 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:29:50 np0005548790.localdomain systemd-rc-local-generator[66081]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:29:50 np0005548790.localdomain systemd-sysv-generator[66084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:29:50 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:29:50 np0005548790.localdomain sudo[66012]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:50 np0005548790.localdomain sudo[66103]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-schmpggzzajaurisfcpzjtfmfnvelyaw ; /usr/bin/python3
Dec 06 08:29:50 np0005548790.localdomain sudo[66103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:51 np0005548790.localdomain python3[66105]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:29:51 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:29:51 np0005548790.localdomain systemd-rc-local-generator[66129]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:29:51 np0005548790.localdomain systemd-sysv-generator[66133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:29:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:29:51 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:29:51 np0005548790.localdomain systemd-sysv-generator[66175]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:29:51 np0005548790.localdomain systemd-rc-local-generator[66171]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:29:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:29:51 np0005548790.localdomain systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Dec 06 08:29:51 np0005548790.localdomain sudo[66103]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:52 np0005548790.localdomain sudo[66195]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erovtiwtyaklgbqqwjfzrdrfpchonyoe ; /usr/bin/python3
Dec 06 08:29:52 np0005548790.localdomain sudo[66195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:52 np0005548790.localdomain python3[66197]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:29:52 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:29:52 np0005548790.localdomain systemd-sysv-generator[66225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:29:52 np0005548790.localdomain systemd-rc-local-generator[66221]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:29:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:29:52 np0005548790.localdomain sudo[66195]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:52 np0005548790.localdomain sudo[66278]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ujoxvfzogwmokgevfvplhehbsveqknic ; /usr/bin/python3
Dec 06 08:29:52 np0005548790.localdomain sudo[66278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:52 np0005548790.localdomain python3[66280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:29:52 np0005548790.localdomain sudo[66278]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:53 np0005548790.localdomain sudo[66321]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bcuwfeqlhyymqkhrlsnrffrrentfsmzx ; /usr/bin/python3
Dec 06 08:29:53 np0005548790.localdomain sudo[66321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:53 np0005548790.localdomain python3[66323]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009792.6564102-107639-142274234484280/source _original_basename=tmp3v9h8i61 follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:29:53 np0005548790.localdomain sudo[66321]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:53 np0005548790.localdomain sudo[66351]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eadwovlfiweiqlcpfvudhvfchgmpbkvw ; /usr/bin/python3
Dec 06 08:29:53 np0005548790.localdomain sudo[66351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:53 np0005548790.localdomain python3[66353]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:29:53 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:29:53 np0005548790.localdomain systemd-rc-local-generator[66379]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:29:53 np0005548790.localdomain systemd-sysv-generator[66382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:29:53 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:29:54 np0005548790.localdomain systemd[1]: Reached target tripleo_nova_libvirt.target.
Dec 06 08:29:54 np0005548790.localdomain sudo[66351]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:54 np0005548790.localdomain sudo[66407]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvnfldumukdblrpxpkjipwgdajgapytu ; /usr/bin/python3
Dec 06 08:29:54 np0005548790.localdomain sudo[66407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:54 np0005548790.localdomain python3[66409]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:29:54 np0005548790.localdomain sudo[66407]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:55 np0005548790.localdomain sudo[66457]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snpqmunstygoroymqvcrqgdxcplqpjwu ; /usr/bin/python3
Dec 06 08:29:55 np0005548790.localdomain sudo[66457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:55 np0005548790.localdomain sudo[66457]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:55 np0005548790.localdomain sudo[66475]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxlydeqghdqouyatftljyatdvjchpizx ; /usr/bin/python3
Dec 06 08:29:55 np0005548790.localdomain sudo[66475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:55 np0005548790.localdomain sudo[66475]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:56 np0005548790.localdomain sudo[66579]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvlktkhjrqhalcmjtcoypckdbegjvpmn ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009795.635527-107736-94506905484198/async_wrapper.py 644433128592 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009795.635527-107736-94506905484198/AnsiballZ_command.py _
Dec 06 08:29:56 np0005548790.localdomain sudo[66579]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 08:29:56 np0005548790.localdomain ansible-async_wrapper.py[66581]: Invoked with 644433128592 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009795.635527-107736-94506905484198/AnsiballZ_command.py _
Dec 06 08:29:56 np0005548790.localdomain ansible-async_wrapper.py[66584]: Starting module and watcher
Dec 06 08:29:56 np0005548790.localdomain ansible-async_wrapper.py[66584]: Start watching 66585 (3600)
Dec 06 08:29:56 np0005548790.localdomain ansible-async_wrapper.py[66585]: Start module (66585)
Dec 06 08:29:56 np0005548790.localdomain ansible-async_wrapper.py[66581]: Return async_wrapper task started.
Dec 06 08:29:56 np0005548790.localdomain sudo[66579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:56 np0005548790.localdomain sudo[66600]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqbgjkwveqecnsthoosapjkeiqjieoor ; /usr/bin/python3
Dec 06 08:29:56 np0005548790.localdomain sudo[66600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:29:56 np0005548790.localdomain python3[66605]: ansible-ansible.legacy.async_status Invoked with jid=644433128592.66581 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:29:56 np0005548790.localdomain sudo[66600]: pam_unix(sudo:session): session closed for user root
Dec 06 08:29:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:29:58 np0005548790.localdomain podman[66656]: 2025-12-06 08:29:58.7660424 +0000 UTC m=+0.072413068 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:29:58 np0005548790.localdomain podman[66656]: 2025-12-06 08:29:58.803085421 +0000 UTC m=+0.109456029 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1)
Dec 06 08:29:58 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (file & line not available)
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (file & line not available)
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:29:59 np0005548790.localdomain puppet-user[66604]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:30:00 np0005548790.localdomain puppet-user[66604]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 06 08:30:00 np0005548790.localdomain puppet-user[66604]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.21 seconds
Dec 06 08:30:01 np0005548790.localdomain ansible-async_wrapper.py[66584]: 66585 still running (3600)
Dec 06 08:30:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:30:01 np0005548790.localdomain systemd[1]: tmp-crun.M8numE.mount: Deactivated successfully.
Dec 06 08:30:01 np0005548790.localdomain podman[66747]: 2025-12-06 08:30:01.580288384 +0000 UTC m=+0.089730682 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:30:01 np0005548790.localdomain podman[66747]: 2025-12-06 08:30:01.591205936 +0000 UTC m=+0.100648264 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, release=1761123044, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, container_name=iscsid)
Dec 06 08:30:01 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:30:06 np0005548790.localdomain ansible-async_wrapper.py[66584]: 66585 still running (3595)
Dec 06 08:30:06 np0005548790.localdomain sudo[66845]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vcvwszkpdxpfussxcpbbzlryhwunocuu ; /usr/bin/python3
Dec 06 08:30:06 np0005548790.localdomain sudo[66845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:06 np0005548790.localdomain python3[66847]: ansible-ansible.legacy.async_status Invoked with jid=644433128592.66581 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:30:06 np0005548790.localdomain sudo[66845]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:07 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:30:07 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 08:30:07 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:07 np0005548790.localdomain systemd-rc-local-generator[66936]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:07 np0005548790.localdomain systemd-sysv-generator[66942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:07 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 08:30:08 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 08:30:08 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 08:30:08 np0005548790.localdomain systemd[1]: run-rf40ad39847754886a30ef69a6980be7c.service: Deactivated successfully.
Dec 06 08:30:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:30:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.1 total, 600.0 interval
                                                          Cumulative writes: 4641 writes, 21K keys, 4641 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4641 writes, 489 syncs, 9.49 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 120 writes, 338 keys, 120 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s
                                                          Interval WAL: 120 writes, 60 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:30:09 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Dec 06 08:30:09 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}46237bee39247e2700f22fea187b827e12a80f35763f9b3f88c85eed737f6f3a'
Dec 06 08:30:09 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Dec 06 08:30:09 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Dec 06 08:30:09 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Dec 06 08:30:09 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Dec 06 08:30:11 np0005548790.localdomain ansible-async_wrapper.py[66584]: 66585 still running (3590)
Dec 06 08:30:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:30:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.2 total, 600.0 interval
                                                          Cumulative writes: 4958 writes, 21K keys, 4958 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4958 writes, 576 syncs, 8.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 165 writes, 357 keys, 165 commit groups, 1.0 writes per commit group, ingest: 0.30 MB, 0.00 MB/s
                                                          Interval WAL: 165 writes, 82 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:30:14 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Dec 06 08:30:14 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:14 np0005548790.localdomain systemd-sysv-generator[67978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:14 np0005548790.localdomain systemd-rc-local-generator[67975]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:14 np0005548790.localdomain systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Dec 06 08:30:14 np0005548790.localdomain snmpd[67989]: Can't find directory of RPM packages
Dec 06 08:30:14 np0005548790.localdomain snmpd[67989]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Dec 06 08:30:14 np0005548790.localdomain systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Dec 06 08:30:14 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:14 np0005548790.localdomain systemd-rc-local-generator[68015]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:14 np0005548790.localdomain systemd-sysv-generator[68020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:15 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:15 np0005548790.localdomain systemd-rc-local-generator[68051]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:15 np0005548790.localdomain systemd-sysv-generator[68054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Notice: Applied catalog in 15.33 seconds
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Application:
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:    Initial environment: production
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:    Converged environment: production
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:          Run mode: user
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Changes:
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:             Total: 8
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Events:
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:           Success: 8
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:             Total: 8
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Resources:
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:         Restarted: 1
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:           Changed: 8
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:       Out of sync: 8
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:             Total: 19
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Time:
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:        Filebucket: 0.00
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:          Schedule: 0.00
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:            Augeas: 0.01
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:              File: 0.08
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:    Config retrieval: 0.26
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:           Service: 1.24
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:    Transaction evaluation: 15.32
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:    Catalog application: 15.33
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:          Last run: 1765009815
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:              Exec: 5.05
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:           Package: 8.76
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:             Total: 15.33
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]: Version:
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:            Config: 1765009799
Dec 06 08:30:15 np0005548790.localdomain puppet-user[66604]:            Puppet: 7.10.0
Dec 06 08:30:15 np0005548790.localdomain ansible-async_wrapper.py[66585]: Module complete (66585)
Dec 06 08:30:15 np0005548790.localdomain sshd[68063]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:30:16 np0005548790.localdomain ansible-async_wrapper.py[66584]: Done in kid B.
Dec 06 08:30:16 np0005548790.localdomain sudo[68078]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blvvgaznlgtyiirgjyupqljhcvdmmsto ; /usr/bin/python3
Dec 06 08:30:16 np0005548790.localdomain sudo[68078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:30:17 np0005548790.localdomain systemd[1]: tmp-crun.HFY9Xj.mount: Deactivated successfully.
Dec 06 08:30:17 np0005548790.localdomain podman[68080]: 2025-12-06 08:30:17.110891657 +0000 UTC m=+0.104880358 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044)
Dec 06 08:30:17 np0005548790.localdomain python3[68081]: ansible-ansible.legacy.async_status Invoked with jid=644433128592.66581 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:30:17 np0005548790.localdomain sudo[68078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:17 np0005548790.localdomain podman[68080]: 2025-12-06 08:30:17.319317373 +0000 UTC m=+0.313306134 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd)
Dec 06 08:30:17 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:30:17 np0005548790.localdomain sshd[68063]: Received disconnect from 43.163.123.45 port 34576:11: Bye Bye [preauth]
Dec 06 08:30:17 np0005548790.localdomain sshd[68063]: Disconnected from authenticating user root 43.163.123.45 port 34576 [preauth]
Dec 06 08:30:17 np0005548790.localdomain sudo[68109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:17 np0005548790.localdomain sudo[68109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:30:17 np0005548790.localdomain sudo[68109]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:17 np0005548790.localdomain sudo[68124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 08:30:17 np0005548790.localdomain sudo[68124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:30:17 np0005548790.localdomain sudo[68152]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eamviljahunpalxgsemrybgvqzsjwgxf ; /usr/bin/python3
Dec 06 08:30:17 np0005548790.localdomain sudo[68152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:17 np0005548790.localdomain sudo[68124]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:17 np0005548790.localdomain python3[68154]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:30:17 np0005548790.localdomain sudo[68152]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:17 np0005548790.localdomain sudo[68176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:30:17 np0005548790.localdomain sudo[68176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:30:17 np0005548790.localdomain sudo[68176]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:18 np0005548790.localdomain sudo[68191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:30:18 np0005548790.localdomain sudo[68191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:30:18 np0005548790.localdomain sudo[68219]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srdfmjoffutbokdrwekhyknbheplylhg ; /usr/bin/python3
Dec 06 08:30:18 np0005548790.localdomain sudo[68219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:18 np0005548790.localdomain python3[68221]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:18 np0005548790.localdomain sudo[68219]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:18 np0005548790.localdomain sudo[68191]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:18 np0005548790.localdomain sudo[68300]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjkngdmiargoccdsehyenittbokmktso ; /usr/bin/python3
Dec 06 08:30:18 np0005548790.localdomain sudo[68300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:18 np0005548790.localdomain python3[68302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:18 np0005548790.localdomain sudo[68300]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:18 np0005548790.localdomain sudo[68318]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjaexgtxzjtuewzikuynschqprpyvpgf ; /usr/bin/python3
Dec 06 08:30:18 np0005548790.localdomain sudo[68318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:18 np0005548790.localdomain python3[68320]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmphr24c7us recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:30:19 np0005548790.localdomain sudo[68318]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:19 np0005548790.localdomain sudo[68348]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guodmwfqoslyyhnguydgaxxwqsmlbvuw ; /usr/bin/python3
Dec 06 08:30:19 np0005548790.localdomain sudo[68348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:19 np0005548790.localdomain python3[68350]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:19 np0005548790.localdomain sudo[68348]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:19 np0005548790.localdomain sudo[68364]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grtbtsnoeozflzbjcbdogitqebcwmfui ; /usr/bin/python3
Dec 06 08:30:19 np0005548790.localdomain sudo[68364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:20 np0005548790.localdomain sudo[68364]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:20 np0005548790.localdomain sudo[68451]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlnoummwjjnvxsskgeodtefgbvqeuxpl ; /usr/bin/python3
Dec 06 08:30:20 np0005548790.localdomain sudo[68451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:20 np0005548790.localdomain python3[68453]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 06 08:30:20 np0005548790.localdomain sudo[68451]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:21 np0005548790.localdomain sudo[68470]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gygxlpnwkqlegzvqepgoygoqmlwvvjmb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:21 np0005548790.localdomain sudo[68470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:21 np0005548790.localdomain python3[68472]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:21 np0005548790.localdomain sudo[68470]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:21 np0005548790.localdomain sudo[68486]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvkpjtittxldklmkibmmwsdofdkftxgd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:21 np0005548790.localdomain sudo[68486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:21 np0005548790.localdomain sudo[68486]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:21 np0005548790.localdomain sudo[68489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:30:21 np0005548790.localdomain sudo[68489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:30:21 np0005548790.localdomain sudo[68489]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:22 np0005548790.localdomain sudo[68517]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhodkbsgudynjpkhfrgcukkwduvotgeo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:22 np0005548790.localdomain sudo[68517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:22 np0005548790.localdomain python3[68519]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:22 np0005548790.localdomain sudo[68517]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:22 np0005548790.localdomain sudo[68567]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nztgeojljhwnynedxvbncysbsqhnnywk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:22 np0005548790.localdomain sudo[68567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:22 np0005548790.localdomain python3[68569]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:22 np0005548790.localdomain sudo[68567]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:22 np0005548790.localdomain sudo[68585]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdxrsuqdioibjdddxpjxihihppqrxsty ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:22 np0005548790.localdomain sudo[68585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:23 np0005548790.localdomain python3[68587]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:23 np0005548790.localdomain sudo[68585]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:23 np0005548790.localdomain sudo[68647]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btpeyizkmwxymdlsffeanlvmdkpsfmti ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:23 np0005548790.localdomain sudo[68647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:23 np0005548790.localdomain python3[68649]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:23 np0005548790.localdomain sudo[68647]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:23 np0005548790.localdomain sudo[68665]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vozdsyswcgwuqcjiyrmpvopwmiavwsyg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:23 np0005548790.localdomain sudo[68665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:23 np0005548790.localdomain python3[68667]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:23 np0005548790.localdomain sudo[68665]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:24 np0005548790.localdomain sudo[68727]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nezrshbpcgacqiuzmveqkzhpgzairdzi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:24 np0005548790.localdomain sudo[68727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:24 np0005548790.localdomain python3[68729]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:24 np0005548790.localdomain sudo[68727]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:24 np0005548790.localdomain sudo[68745]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etonirfilffdxzadzifowzytaxxzuulp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:24 np0005548790.localdomain sudo[68745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:24 np0005548790.localdomain python3[68747]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:24 np0005548790.localdomain sudo[68745]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:25 np0005548790.localdomain sudo[68807]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzjkpkzjbucdobsckvoobxnsjijcgwku ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:25 np0005548790.localdomain sudo[68807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:25 np0005548790.localdomain python3[68809]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:25 np0005548790.localdomain sudo[68807]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:25 np0005548790.localdomain sudo[68825]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dletzktjyhzulsvhbivfxifnylrktgyc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:25 np0005548790.localdomain sudo[68825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:25 np0005548790.localdomain python3[68827]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:25 np0005548790.localdomain sudo[68825]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:25 np0005548790.localdomain sudo[68855]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwidbsukvhvxnqfcxzrkcqtnxsjalmvs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:25 np0005548790.localdomain sudo[68855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:26 np0005548790.localdomain python3[68857]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:26 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:26 np0005548790.localdomain systemd-sysv-generator[68885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:26 np0005548790.localdomain systemd-rc-local-generator[68881]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:26 np0005548790.localdomain sudo[68855]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:26 np0005548790.localdomain sudo[68941]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgxqzjrkzybuqigxhengvyqvnothbhsr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:26 np0005548790.localdomain sudo[68941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:26 np0005548790.localdomain python3[68943]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:26 np0005548790.localdomain sudo[68941]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:27 np0005548790.localdomain sudo[68959]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bonaolsmxmcnwozwxqnatdvjhgkivxkg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:27 np0005548790.localdomain sudo[68959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:27 np0005548790.localdomain python3[68961]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:27 np0005548790.localdomain sudo[68959]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:27 np0005548790.localdomain sudo[69021]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqpkofqogqdnjfpdrvjvfvfdqszjnbiw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:27 np0005548790.localdomain sudo[69021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:27 np0005548790.localdomain python3[69023]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:30:27 np0005548790.localdomain sudo[69021]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:27 np0005548790.localdomain sudo[69039]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akwwoaalfpmftfiqtdvirmeemfwdyjxz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:27 np0005548790.localdomain sudo[69039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:28 np0005548790.localdomain python3[69041]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:28 np0005548790.localdomain sudo[69039]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:28 np0005548790.localdomain sudo[69069]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ziudwltbvwmurgyjqhpvfjdwoulhsmrf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:28 np0005548790.localdomain sudo[69069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:28 np0005548790.localdomain python3[69071]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:28 np0005548790.localdomain systemd-rc-local-generator[69093]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:28 np0005548790.localdomain systemd-sysv-generator[69098]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 08:30:28 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 08:30:29 np0005548790.localdomain sudo[69069]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:29 np0005548790.localdomain podman[69109]: 2025-12-06 08:30:29.02211454 +0000 UTC m=+0.091543501 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, container_name=collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:30:29 np0005548790.localdomain podman[69109]: 2025-12-06 08:30:29.036173906 +0000 UTC m=+0.105602897 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 06 08:30:29 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:30:29 np0005548790.localdomain sudo[69145]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqtqjzphjvknpdkmbtsbygatsmvvebdl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:29 np0005548790.localdomain sudo[69145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:29 np0005548790.localdomain python3[69147]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 06 08:30:29 np0005548790.localdomain sudo[69145]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:29 np0005548790.localdomain sudo[69161]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iygnpcfhiajwbygdxupehdhcxxrokvei ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:29 np0005548790.localdomain sudo[69161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:30 np0005548790.localdomain sudo[69161]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 np0005548790.localdomain sudo[69203]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oziblzuhybymnolpogciumwqbaevqiue ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:31 np0005548790.localdomain sudo[69203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:31 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 06 08:30:31 np0005548790.localdomain podman[69365]: 2025-12-06 08:30:31.681817569 +0000 UTC m=+0.070995921 container create a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:30:31 np0005548790.localdomain podman[69381]: 2025-12-06 08:30:31.707503196 +0000 UTC m=+0.077632629 container create da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=configure_cms_options, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.scope.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:31 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a7065aceffc6d2146ce223b38d40dafae928acede64701f0e57091e6babe580/merged/var/log/containers supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:31 np0005548790.localdomain podman[69365]: 2025-12-06 08:30:31.65347206 +0000 UTC m=+0.042650422 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 06 08:30:31 np0005548790.localdomain podman[69373]: 2025-12-06 08:30:31.75887133 +0000 UTC m=+0.138434545 container create 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:30:31 np0005548790.localdomain podman[69373]: 2025-12-06 08:30:31.657687012 +0000 UTC m=+0.037250247 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 06 08:30:31 np0005548790.localdomain podman[69389]: 2025-12-06 08:30:31.665711097 +0000 UTC m=+0.034675718 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 06 08:30:31 np0005548790.localdomain podman[69381]: 2025-12-06 08:30:31.676438474 +0000 UTC m=+0.046567917 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 06 08:30:31 np0005548790.localdomain podman[69389]: 2025-12-06 08:30:31.776323798 +0000 UTC m=+0.145288419 container create 16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', 
'/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_libvirt_init_secret, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830.scope.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.scope.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:31 np0005548790.localdomain podman[69416]: 2025-12-06 08:30:31.799305462 +0000 UTC m=+0.131718455 container create 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:30:31 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d6d513da38af6e3cd155d4b3518d4b989d374acab410fbc1ad1d5be1919c445/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643.scope.
Dec 06 08:30:31 np0005548790.localdomain podman[69416]: 2025-12-06 08:30:31.730933262 +0000 UTC m=+0.063346295 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.scope.
Dec 06 08:30:31 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2b1f5f95097aa03fff37b0cad6d11c71f5f221a93020bec6d77c9d01b86ad5/merged/etc/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:31 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2b1f5f95097aa03fff37b0cad6d11c71f5f221a93020bec6d77c9d01b86ad5/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:31 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d2b1f5f95097aa03fff37b0cad6d11c71f5f221a93020bec6d77c9d01b86ad5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:31 np0005548790.localdomain podman[69432]: 2025-12-06 08:30:31.839058566 +0000 UTC m=+0.122651503 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, distribution-scope=public, container_name=iscsid, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:30:31 np0005548790.localdomain podman[69365]: 2025-12-06 08:30:31.847129792 +0000 UTC m=+0.236308164 container init a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, container_name=logrotate_crond, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Dec 06 08:30:31 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c39009f0f6008c6d55691d8dd4cf23ac737f3eb0a424c8d14f1ebded10dc0a2/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:31 np0005548790.localdomain sudo[69487]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:30:31 np0005548790.localdomain sudo[69487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:30:31 np0005548790.localdomain podman[69432]: 2025-12-06 08:30:31.876044465 +0000 UTC m=+0.159637442 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:30:31 np0005548790.localdomain podman[69365]: 2025-12-06 08:30:31.890632266 +0000 UTC m=+0.279810608 container start a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Dec 06 08:30:31 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:30:31 np0005548790.localdomain podman[69389]: 2025-12-06 08:30:31.899686829 +0000 UTC m=+0.268651460 container init 16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, vcs-type=git, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Dec 06 08:30:31 np0005548790.localdomain podman[69416]: 2025-12-06 08:30:31.901080045 +0000 UTC m=+0.233493068 container init 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-type=git, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z)
Dec 06 08:30:31 np0005548790.localdomain sudo[69487]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 np0005548790.localdomain crond[69486]: (CRON) STARTUP (1.5.7)
Dec 06 08:30:31 np0005548790.localdomain crond[69486]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 36% if used.)
Dec 06 08:30:31 np0005548790.localdomain crond[69486]: (CRON) INFO (running with inotify support)
Dec 06 08:30:31 np0005548790.localdomain sudo[69505]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:30:31 np0005548790.localdomain podman[69373]: 2025-12-06 08:30:31.932839756 +0000 UTC m=+0.312403021 container init 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO 
Team, tcib_managed=true, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:30:31 np0005548790.localdomain sudo[69505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:30:31 np0005548790.localdomain sudo[69524]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:30:31 np0005548790.localdomain sudo[69524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 06 08:30:31 np0005548790.localdomain podman[69381]: 2025-12-06 08:30:31.958539513 +0000 UTC m=+0.328668936 container init da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=configure_cms_options, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64)
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:30:31 np0005548790.localdomain podman[69373]: 2025-12-06 08:30:31.964105322 +0000 UTC m=+0.343668537 container start 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:30:31 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=94eddc2d1a780b6dc03d015a7bd0e411 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 06 08:30:31 np0005548790.localdomain podman[69381]: 2025-12-06 08:30:31.973731029 +0000 UTC m=+0.343860452 container start da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=configure_cms_options, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, managed_by=tripleo_ansible)
Dec 06 08:30:31 np0005548790.localdomain podman[69381]: 2025-12-06 08:30:31.973969916 +0000 UTC m=+0.344099339 container attach da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, tcib_managed=true, container_name=configure_cms_options, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, managed_by=tripleo_ansible)
Dec 06 08:30:31 np0005548790.localdomain podman[69489]: 2025-12-06 08:30:31.975054395 +0000 UTC m=+0.086581188 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, distribution-scope=public, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, architecture=x86_64)
Dec 06 08:30:31 np0005548790.localdomain podman[69489]: 2025-12-06 08:30:31.983020118 +0000 UTC m=+0.094546911 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:30:31 np0005548790.localdomain sudo[69505]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:31 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:30:31 np0005548790.localdomain podman[69416]: 2025-12-06 08:30:31.999448917 +0000 UTC m=+0.331861910 container start 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:30:31 np0005548790.localdomain sudo[69524]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:32 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=94eddc2d1a780b6dc03d015a7bd0e411 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 06 08:30:32 np0005548790.localdomain podman[69536]: 2025-12-06 08:30:32.058061746 +0000 UTC m=+0.091099088 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:30:32 np0005548790.localdomain podman[69389]: 2025-12-06 08:30:32.060320066 +0000 UTC m=+0.429284687 container start 16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, release=1761123044, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, 
url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt)
Dec 06 08:30:32 np0005548790.localdomain podman[69389]: 2025-12-06 08:30:32.060500831 +0000 UTC m=+0.429465482 container attach 16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, 
vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_libvirt_init_secret)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: libpod-16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643.scope: Deactivated successfully.
Dec 06 08:30:32 np0005548790.localdomain podman[69523]: 2025-12-06 08:30:32.065069814 +0000 UTC m=+0.113541090 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:30:32 np0005548790.localdomain ovs-vsctl[69609]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options
Dec 06 08:30:32 np0005548790.localdomain podman[69389]: 2025-12-06 08:30:32.069620025 +0000 UTC m=+0.438584646 container died 16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_libvirt_init_secret, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: libpod-da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830.scope: Deactivated successfully.
Dec 06 08:30:32 np0005548790.localdomain podman[69523]: 2025-12-06 08:30:32.104944851 +0000 UTC m=+0.153416117 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 06 08:30:32 np0005548790.localdomain podman[69523]: unhealthy
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Failed with result 'exit-code'.
Dec 06 08:30:32 np0005548790.localdomain podman[69381]: 2025-12-06 08:30:32.173693291 +0000 UTC m=+0.543822714 container died da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=configure_cms_options, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 06 08:30:32 np0005548790.localdomain podman[69611]: 2025-12-06 08:30:32.183198955 +0000 UTC m=+0.109852181 container cleanup 16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-libvirt, release=1761123044, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, container_name=nova_libvirt_init_secret, config_id=tripleo_step4)
Dec 06 08:30:32 np0005548790.localdomain podman[69536]: 2025-12-06 08:30:32.188758113 +0000 UTC m=+0.221795455 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, 
tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, version=17.1.12)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: libpod-conmon-16733be2f45b7734ae789392cc81689fa09091df67a30ee74cf4dbb4a5951643.scope: Deactivated successfully.
Dec 06 08:30:32 np0005548790.localdomain podman[69536]: unhealthy
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Failed with result 'exit-code'.
Dec 06 08:30:32 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack
Dec 06 08:30:32 np0005548790.localdomain podman[69624]: 2025-12-06 08:30:32.29175366 +0000 UTC m=+0.212161969 container cleanup da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=configure_cms_options, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-ovn-controller-container, release=1761123044)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: libpod-conmon-da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830.scope: Deactivated successfully.
Dec 06 08:30:32 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Dec 06 08:30:32 np0005548790.localdomain sshd[69801]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:30:32 np0005548790.localdomain podman[69739]: 2025-12-06 08:30:32.479221686 +0000 UTC m=+0.181400905 container create 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: Started libpod-conmon-8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.scope.
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:32 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba8b2af72d8dcbf02be78782d6a093327973a6f19db17113d2698cfcfba8f0d1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:32 np0005548790.localdomain podman[69748]: 2025-12-06 08:30:32.535729038 +0000 UTC m=+0.222534756 container create 19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=setup_ovs_manager, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']})
Dec 06 08:30:32 np0005548790.localdomain podman[69748]: 2025-12-06 08:30:32.440424188 +0000 UTC m=+0.127229956 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 06 08:30:32 np0005548790.localdomain podman[69739]: 2025-12-06 08:30:32.43903697 +0000 UTC m=+0.141216209 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:30:32 np0005548790.localdomain podman[69739]: 2025-12-06 08:30:32.54890585 +0000 UTC m=+0.251085089 container init 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Dec 06 08:30:32 np0005548790.localdomain sudo[69816]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:30:32 np0005548790.localdomain sudo[69816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: Started libpod-conmon-19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9.scope.
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:30:32 np0005548790.localdomain podman[69739]: 2025-12-06 08:30:32.580083215 +0000 UTC m=+0.282262454 container start 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:32 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:30:32 np0005548790.localdomain podman[69748]: 2025-12-06 08:30:32.605411663 +0000 UTC m=+0.292217381 container init 19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=setup_ovs_manager, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:30:32 np0005548790.localdomain podman[69748]: 2025-12-06 08:30:32.6146617 +0000 UTC m=+0.301467398 container start 19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1)
Dec 06 08:30:32 np0005548790.localdomain podman[69748]: 2025-12-06 08:30:32.61505161 +0000 UTC m=+0.301857348 container attach 19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:30:32 np0005548790.localdomain sudo[69816]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:32 np0005548790.localdomain sshd[69844]: Server listening on 0.0.0.0 port 2022.
Dec 06 08:30:32 np0005548790.localdomain sshd[69844]: Server listening on :: port 2022.
Dec 06 08:30:32 np0005548790.localdomain podman[69820]: 2025-12-06 08:30:32.651118735 +0000 UTC m=+0.068670109 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4)
Dec 06 08:30:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830-userdata-shm.mount: Deactivated successfully.
Dec 06 08:30:32 np0005548790.localdomain sudo[69874]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpjkf8jrl_/privsep.sock
Dec 06 08:30:32 np0005548790.localdomain sudo[69874]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 06 08:30:33 np0005548790.localdomain podman[69820]: 2025-12-06 08:30:33.003458874 +0000 UTC m=+0.421010198 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:30:33 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:30:33 np0005548790.localdomain kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 06 08:30:33 np0005548790.localdomain sudo[69874]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:34 np0005548790.localdomain sshd[69801]: Received disconnect from 103.226.138.52 port 55034:11: Bye Bye [preauth]
Dec 06 08:30:34 np0005548790.localdomain sshd[69801]: Disconnected from authenticating user root 103.226.138.52 port 55034 [preauth]
Dec 06 08:30:35 np0005548790.localdomain ovs-vsctl[69996]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: libpod-19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9.scope: Deactivated successfully.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: libpod-19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9.scope: Consumed 2.691s CPU time.
Dec 06 08:30:35 np0005548790.localdomain podman[69748]: 2025-12-06 08:30:35.297285663 +0000 UTC m=+2.984091421 container died 19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9-userdata-shm.mount: Deactivated successfully.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7f25be2b5c3eb053d9b9667fd987790b6eba6abedf717019f3a18cf19bc3f462-merged.mount: Deactivated successfully.
Dec 06 08:30:35 np0005548790.localdomain podman[69997]: 2025-12-06 08:30:35.403396221 +0000 UTC m=+0.089699191 container cleanup 19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=setup_ovs_manager, vendor=Red Hat, Inc.)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: libpod-conmon-19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9.scope: Deactivated successfully.
Dec 06 08:30:35 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1765008053 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1765008053'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata
Dec 06 08:30:35 np0005548790.localdomain podman[70117]: 2025-12-06 08:30:35.818850739 +0000 UTC m=+0.056726500 container create 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:30:35 np0005548790.localdomain podman[70110]: 2025-12-06 08:30:35.843394346 +0000 UTC m=+0.093691929 container create 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started libpod-conmon-8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.scope.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started libpod-conmon-8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.scope.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:35 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8284dca6529e5ab9438e0117511d130f8650dec0e9dc23d1b17bfc3ebcf839dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8284dca6529e5ab9438e0117511d130f8650dec0e9dc23d1b17bfc3ebcf839dd/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8284dca6529e5ab9438e0117511d130f8650dec0e9dc23d1b17bfc3ebcf839dd/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:30:35 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395787d7c7157781d23da507bf0dd85e09ccfaea104452a9558b49679a2c1ad/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 np0005548790.localdomain podman[70110]: 2025-12-06 08:30:35.786357029 +0000 UTC m=+0.036654632 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 06 08:30:35 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395787d7c7157781d23da507bf0dd85e09ccfaea104452a9558b49679a2c1ad/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7395787d7c7157781d23da507bf0dd85e09ccfaea104452a9558b49679a2c1ad/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Dec 06 08:30:35 np0005548790.localdomain podman[70117]: 2025-12-06 08:30:35.788288831 +0000 UTC m=+0.026164602 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:30:35 np0005548790.localdomain podman[70117]: 2025-12-06 08:30:35.900946215 +0000 UTC m=+0.138821986 container init 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:30:35 np0005548790.localdomain podman[70110]: 2025-12-06 08:30:35.909324889 +0000 UTC m=+0.159622442 container init 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, container_name=ovn_controller, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 06 08:30:35 np0005548790.localdomain sudo[70150]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:30:35 np0005548790.localdomain sudo[70150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:30:35 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 08:30:35 np0005548790.localdomain podman[70117]: 2025-12-06 08:30:35.944069259 +0000 UTC m=+0.181945030 container start 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_metadata_agent, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 08:30:35 np0005548790.localdomain podman[70110]: 2025-12-06 08:30:35.958711891 +0000 UTC m=+0.209009434 container start 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible)
Dec 06 08:30:35 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=9b9208098644933bd8c0484efcd7b934 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True 
--volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 06 08:30:35 np0005548790.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 06 08:30:35 np0005548790.localdomain python3[69205]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 06 08:30:35 np0005548790.localdomain systemd[70167]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:30:36 np0005548790.localdomain sudo[70150]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 np0005548790.localdomain podman[70154]: 2025-12-06 08:30:36.026889895 +0000 UTC m=+0.081127022 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Queued start job for default target Main User Target.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Created slice User Application Slice.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Reached target Paths.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Reached target Timers.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Starting D-Bus User Message Bus Socket...
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Starting Create User's Volatile Files and Directories...
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Listening on D-Bus User Message Bus Socket.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Reached target Sockets.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Finished Create User's Volatile Files and Directories.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Reached target Basic System.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Reached target Main User Target.
Dec 06 08:30:36 np0005548790.localdomain systemd[70167]: Startup finished in 127ms.
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: Started User Manager for UID 0.
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: Started Session c9 of User root.
Dec 06 08:30:36 np0005548790.localdomain podman[70161]: 2025-12-06 08:30:36.155932488 +0000 UTC m=+0.198044581 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:30:36 np0005548790.localdomain podman[70154]: 2025-12-06 08:30:36.165884725 +0000 UTC m=+0.220121842 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public)
Dec 06 08:30:36 np0005548790.localdomain podman[70154]: unhealthy
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 08:30:36 np0005548790.localdomain podman[70161]: 2025-12-06 08:30:36.195110636 +0000 UTC m=+0.237222729 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:30:36 np0005548790.localdomain podman[70161]: unhealthy
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 08:30:36 np0005548790.localdomain systemd[1]: session-c9.scope: Deactivated successfully.
Dec 06 08:30:36 np0005548790.localdomain kernel: device br-int entered promiscuous mode
Dec 06 08:30:36 np0005548790.localdomain NetworkManager[5968]: <info>  [1765009836.2664] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Dec 06 08:30:36 np0005548790.localdomain sudo[69203]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 np0005548790.localdomain systemd-udevd[70265]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:30:36 np0005548790.localdomain sudo[70283]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trtcbenmqlfxpoodfoqgtmqzfwydqzjs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:36 np0005548790.localdomain sudo[70283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:36 np0005548790.localdomain python3[70285]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:36 np0005548790.localdomain sudo[70283]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:36 np0005548790.localdomain sudo[70299]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dagqibffvnqstyppalhczybaszhcfvzu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:36 np0005548790.localdomain sudo[70299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:37 np0005548790.localdomain python3[70301]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:37 np0005548790.localdomain sudo[70299]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:37 np0005548790.localdomain sudo[70315]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hghhpnvuvdlhcndokjwfhnmzaudtstcy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:37 np0005548790.localdomain sudo[70315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:37 np0005548790.localdomain python3[70317]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:37 np0005548790.localdomain sudo[70315]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:37 np0005548790.localdomain kernel: device genev_sys_6081 entered promiscuous mode
Dec 06 08:30:37 np0005548790.localdomain systemd-udevd[70267]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 08:30:37 np0005548790.localdomain NetworkManager[5968]: <info>  [1765009837.3286] device (genev_sys_6081): carrier: link connected
Dec 06 08:30:37 np0005548790.localdomain NetworkManager[5968]: <info>  [1765009837.3292] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Dec 06 08:30:37 np0005548790.localdomain sudo[70334]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nepvgzryflbodfqlajfvdtaegbrvuybu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:37 np0005548790.localdomain sudo[70334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:37 np0005548790.localdomain python3[70336]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:37 np0005548790.localdomain sudo[70334]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:37 np0005548790.localdomain sudo[70338]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpjj_ysr1m/privsep.sock
Dec 06 08:30:37 np0005548790.localdomain sudo[70338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Dec 06 08:30:37 np0005548790.localdomain sudo[70353]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trxcxqvgrqhrlyqwyqiefxnmdbdcdfrh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:37 np0005548790.localdomain sudo[70353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:37 np0005548790.localdomain python3[70355]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:37 np0005548790.localdomain sudo[70353]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:37 np0005548790.localdomain sudo[70370]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhjiuirddalmipmlytqmomdwtlwchrcg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:37 np0005548790.localdomain sudo[70370]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:38 np0005548790.localdomain python3[70372]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:38 np0005548790.localdomain sudo[70370]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 np0005548790.localdomain sudo[70386]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cycsjlcjgmbqdtjwayqhfdftqxhlqayk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:38 np0005548790.localdomain sudo[70386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:38 np0005548790.localdomain sudo[70338]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 np0005548790.localdomain python3[70388]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:38 np0005548790.localdomain sudo[70386]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 np0005548790.localdomain sudo[70404]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxmoxilumkfmohmfxbboozbyfqqustht ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:38 np0005548790.localdomain sudo[70404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:38 np0005548790.localdomain python3[70406]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:38 np0005548790.localdomain sudo[70404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 np0005548790.localdomain sudo[70422]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjtqduiubgugabbayafwjwrkwpnqjcal ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:38 np0005548790.localdomain sudo[70422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:38 np0005548790.localdomain python3[70424]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:38 np0005548790.localdomain sudo[70422]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:38 np0005548790.localdomain sudo[70438]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezyhctxzexftqjsjosjuyrsobmvvsppo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:38 np0005548790.localdomain sudo[70438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:39 np0005548790.localdomain python3[70440]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:39 np0005548790.localdomain sudo[70438]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:39 np0005548790.localdomain sudo[70454]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwdqkqdehgbkyhtxtitnzlysisiugdhj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:39 np0005548790.localdomain sudo[70454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:39 np0005548790.localdomain python3[70456]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:39 np0005548790.localdomain sudo[70454]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:39 np0005548790.localdomain sudo[70470]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsdxuhqayjogvgihhaxbxuzoopjextqc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:39 np0005548790.localdomain sudo[70470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:39 np0005548790.localdomain python3[70472]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:30:39 np0005548790.localdomain sudo[70470]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:40 np0005548790.localdomain sudo[70531]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qofecqyvqyuzsdlbnsrpmjqufnfwxckx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:40 np0005548790.localdomain sudo[70531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:40 np0005548790.localdomain python3[70533]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.6410306-109035-39896188420114/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:40 np0005548790.localdomain sudo[70531]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:40 np0005548790.localdomain sudo[70560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-noqhpfjbokccyczdrwmlolmnxteueonu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:40 np0005548790.localdomain sudo[70560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:40 np0005548790.localdomain python3[70562]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.6410306-109035-39896188420114/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:40 np0005548790.localdomain sudo[70560]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:41 np0005548790.localdomain sudo[70589]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tehiyviqwzkgchmeyfbsnuyqbgrqsegn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:41 np0005548790.localdomain sudo[70589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:41 np0005548790.localdomain python3[70591]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.6410306-109035-39896188420114/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:41 np0005548790.localdomain sudo[70589]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:41 np0005548790.localdomain sudo[70618]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ulomtnipsatgjtlvlhvstzeesleqkyhs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:41 np0005548790.localdomain sudo[70618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:41 np0005548790.localdomain python3[70620]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.6410306-109035-39896188420114/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:41 np0005548790.localdomain sudo[70618]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:42 np0005548790.localdomain sudo[70647]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzdvonkijefplhpsdkmtjmvlkrrperqs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:42 np0005548790.localdomain sudo[70647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:42 np0005548790.localdomain python3[70649]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.6410306-109035-39896188420114/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:42 np0005548790.localdomain sudo[70647]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:42 np0005548790.localdomain sudo[70676]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngilmhjtlczyzyomypennigwtrlpzpfe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:42 np0005548790.localdomain sudo[70676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:42 np0005548790.localdomain python3[70678]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765009839.6410306-109035-39896188420114/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:42 np0005548790.localdomain sudo[70676]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:42 np0005548790.localdomain sudo[70692]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glrdihftsgabbkmtdcfiyxopwqrmsinc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:42 np0005548790.localdomain sudo[70692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:43 np0005548790.localdomain python3[70694]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 08:30:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:43 np0005548790.localdomain systemd-rc-local-generator[70711]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:43 np0005548790.localdomain systemd-sysv-generator[70714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:43 np0005548790.localdomain sudo[70692]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:44 np0005548790.localdomain sudo[70744]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ruqntiiiehkbuyzmmmuohlanwucsuibz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:44 np0005548790.localdomain sudo[70744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:44 np0005548790.localdomain python3[70746]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:45 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:45 np0005548790.localdomain systemd-rc-local-generator[70773]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:45 np0005548790.localdomain systemd-sysv-generator[70776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:45 np0005548790.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 06 08:30:45 np0005548790.localdomain tripleo-start-podman-container[70786]: Creating additional drop-in dependency for "ceilometer_agent_compute" (610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be)
Dec 06 08:30:45 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:46 np0005548790.localdomain systemd-rc-local-generator[70843]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:46 np0005548790.localdomain systemd-sysv-generator[70848]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Activating special unit Exit the Session...
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped target Main User Target.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped target Basic System.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped target Paths.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped target Sockets.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped target Timers.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Closed D-Bus User Message Bus Socket.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Stopped Create User's Volatile Files and Directories.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Removed slice User Application Slice.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Reached target Shutdown.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Finished Exit the Session.
Dec 06 08:30:46 np0005548790.localdomain systemd[70167]: Reached target Exit the Session.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 08:30:46 np0005548790.localdomain sudo[70744]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 08:30:46 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 06 08:30:46 np0005548790.localdomain sudo[70869]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oojygbfvxqwzyjwzrkxqflqctnqgrmtp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:46 np0005548790.localdomain sudo[70869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:46 np0005548790.localdomain python3[70871]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:30:47 np0005548790.localdomain systemd[1]: tmp-crun.l38ReT.mount: Deactivated successfully.
Dec 06 08:30:47 np0005548790.localdomain podman[70874]: 2025-12-06 08:30:47.608868739 +0000 UTC m=+0.119836277 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 06 08:30:47 np0005548790.localdomain podman[70874]: 2025-12-06 08:30:47.800104767 +0000 UTC m=+0.311072355 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr)
Dec 06 08:30:47 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:30:47 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:48 np0005548790.localdomain systemd-rc-local-generator[70929]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:48 np0005548790.localdomain systemd-sysv-generator[70933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:48 np0005548790.localdomain systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 06 08:30:48 np0005548790.localdomain systemd[1]: Started ceilometer_agent_ipmi container.
Dec 06 08:30:48 np0005548790.localdomain sudo[70869]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:48 np0005548790.localdomain sudo[70966]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezbrbhnukejmrrygetfebnfdnrdzykfc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:48 np0005548790.localdomain sudo[70966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:48 np0005548790.localdomain python3[70968]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:49 np0005548790.localdomain systemd-rc-local-generator[70992]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:49 np0005548790.localdomain systemd-sysv-generator[70996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:49 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:49 np0005548790.localdomain systemd[1]: Starting logrotate_crond container...
Dec 06 08:30:49 np0005548790.localdomain systemd[1]: Started logrotate_crond container.
Dec 06 08:30:49 np0005548790.localdomain sudo[70966]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:49 np0005548790.localdomain sudo[71034]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aixasahcimeynhecgcrbfhqfqulrarmp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:49 np0005548790.localdomain sudo[71034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:49 np0005548790.localdomain python3[71036]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:50 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:50 np0005548790.localdomain systemd-rc-local-generator[71060]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:50 np0005548790.localdomain systemd-sysv-generator[71064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:50 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:50 np0005548790.localdomain systemd[1]: Starting nova_migration_target container...
Dec 06 08:30:50 np0005548790.localdomain systemd[1]: Started nova_migration_target container.
Dec 06 08:30:50 np0005548790.localdomain sudo[71034]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:50 np0005548790.localdomain sudo[71100]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znrincyzdanjdcwgqbersxfdoxicpush ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:50 np0005548790.localdomain sudo[71100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:51 np0005548790.localdomain python3[71102]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:51 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:51 np0005548790.localdomain systemd-sysv-generator[71130]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:51 np0005548790.localdomain systemd-rc-local-generator[71126]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:51 np0005548790.localdomain systemd[1]: Starting ovn_controller container...
Dec 06 08:30:51 np0005548790.localdomain tripleo-start-podman-container[71142]: Creating additional drop-in dependency for "ovn_controller" (8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3)
Dec 06 08:30:51 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:51 np0005548790.localdomain systemd-sysv-generator[71199]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:51 np0005548790.localdomain systemd-rc-local-generator[71195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:52 np0005548790.localdomain systemd[1]: Started ovn_controller container.
Dec 06 08:30:52 np0005548790.localdomain sudo[71100]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:52 np0005548790.localdomain sudo[71223]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osiefhdkjkqwxdcyxieqqjbmilpqzcec ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:30:52 np0005548790.localdomain sudo[71223]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:52 np0005548790.localdomain python3[71225]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:30:52 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:30:52 np0005548790.localdomain systemd-sysv-generator[71254]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:30:52 np0005548790.localdomain systemd-rc-local-generator[71251]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:30:53 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:30:53 np0005548790.localdomain systemd[1]: Starting ovn_metadata_agent container...
Dec 06 08:30:53 np0005548790.localdomain systemd[1]: Started ovn_metadata_agent container.
Dec 06 08:30:53 np0005548790.localdomain sudo[71223]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:53 np0005548790.localdomain sudo[71304]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijjlxejgsdjukskqmewqpovyyaqhqotk ; /usr/bin/python3
Dec 06 08:30:53 np0005548790.localdomain sudo[71304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:53 np0005548790.localdomain python3[71306]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:53 np0005548790.localdomain sudo[71304]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:54 np0005548790.localdomain sudo[71352]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vstyunzrsrfwtqwzvckhroitpldyffff ; /usr/bin/python3
Dec 06 08:30:54 np0005548790.localdomain sudo[71352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:54 np0005548790.localdomain sudo[71352]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:54 np0005548790.localdomain sudo[71395]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okkoiggqvxlzgjkmvjnyjzowiwampxkw ; /usr/bin/python3
Dec 06 08:30:54 np0005548790.localdomain sudo[71395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:54 np0005548790.localdomain sudo[71395]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:55 np0005548790.localdomain sudo[71425]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gptfkuadhwpryndvixdcguxxhfsgwiuz ; /usr/bin/python3
Dec 06 08:30:55 np0005548790.localdomain sudo[71425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:55 np0005548790.localdomain python3[71427]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005548790 step=4 update_config_hash_only=False
Dec 06 08:30:55 np0005548790.localdomain sudo[71425]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:55 np0005548790.localdomain sudo[71442]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxafkfccekezjiagtgdbtoaxshoqbitq ; /usr/bin/python3
Dec 06 08:30:55 np0005548790.localdomain sudo[71442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:55 np0005548790.localdomain python3[71444]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:30:55 np0005548790.localdomain sudo[71442]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:55 np0005548790.localdomain sudo[71458]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cudajkgaxhsrddoipnczczhasdbtmfod ; /usr/bin/python3
Dec 06 08:30:55 np0005548790.localdomain sudo[71458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:30:56 np0005548790.localdomain python3[71460]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 06 08:30:56 np0005548790.localdomain sudo[71458]: pam_unix(sudo:session): session closed for user root
Dec 06 08:30:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:30:59 np0005548790.localdomain systemd[1]: tmp-crun.FSvZLU.mount: Deactivated successfully.
Dec 06 08:30:59 np0005548790.localdomain podman[71461]: 2025-12-06 08:30:59.571091828 +0000 UTC m=+0.088719934 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, container_name=collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 06 08:30:59 np0005548790.localdomain podman[71461]: 2025-12-06 08:30:59.585297739 +0000 UTC m=+0.102925825 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step3, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-collectd)
Dec 06 08:30:59 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: tmp-crun.gT0Ys1.mount: Deactivated successfully.
Dec 06 08:31:02 np0005548790.localdomain podman[71483]: 2025-12-06 08:31:02.570003374 +0000 UTC m=+0.085185880 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 06 08:31:02 np0005548790.localdomain podman[71483]: 2025-12-06 08:31:02.601114937 +0000 UTC m=+0.116297403 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, distribution-scope=public, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git)
Dec 06 08:31:02 np0005548790.localdomain podman[71484]: 2025-12-06 08:31:02.60684043 +0000 UTC m=+0.120515576 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., 
com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4)
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:31:02 np0005548790.localdomain podman[71484]: 2025-12-06 08:31:02.626683621 +0000 UTC m=+0.140358817 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com)
Dec 06 08:31:02 np0005548790.localdomain podman[71482]: 2025-12-06 08:31:02.661799971 +0000 UTC m=+0.177753198 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, version=17.1.12)
Dec 06 08:31:02 np0005548790.localdomain podman[71485]: 2025-12-06 08:31:02.590682577 +0000 UTC m=+0.098225369 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:31:02 np0005548790.localdomain podman[71482]: 2025-12-06 08:31:02.699124499 +0000 UTC m=+0.215077706 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, container_name=iscsid, distribution-scope=public)
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:31:02 np0005548790.localdomain podman[71485]: 2025-12-06 08:31:02.720159512 +0000 UTC m=+0.227702264 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, container_name=logrotate_crond)
Dec 06 08:31:02 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:31:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:31:03 np0005548790.localdomain podman[71569]: 2025-12-06 08:31:03.529659523 +0000 UTC m=+0.053933965 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 
nova-compute, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:31:03 np0005548790.localdomain systemd[1]: tmp-crun.XTfc5P.mount: Deactivated successfully.
Dec 06 08:31:03 np0005548790.localdomain podman[71569]: 2025-12-06 08:31:03.900806744 +0000 UTC m=+0.425081216 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:31:03 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:31:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:31:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:31:06 np0005548790.localdomain podman[71593]: 2025-12-06 08:31:06.543849467 +0000 UTC m=+0.055132916 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, version=17.1.12)
Dec 06 08:31:06 np0005548790.localdomain podman[71593]: 2025-12-06 08:31:06.594196854 +0000 UTC m=+0.105480293 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller)
Dec 06 08:31:06 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:31:06 np0005548790.localdomain podman[71592]: 2025-12-06 08:31:06.659369118 +0000 UTC m=+0.170914304 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 06 08:31:06 np0005548790.localdomain podman[71592]: 2025-12-06 08:31:06.725095836 +0000 UTC m=+0.236641032 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12)
Dec 06 08:31:06 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:31:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 08:31:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 08:31:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:31:18 np0005548790.localdomain systemd[1]: tmp-crun.qb1JGc.mount: Deactivated successfully.
Dec 06 08:31:18 np0005548790.localdomain podman[71639]: 2025-12-06 08:31:18.562917328 +0000 UTC m=+0.079101018 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, release=1761123044, 
io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:31:18 np0005548790.localdomain podman[71639]: 2025-12-06 08:31:18.787547798 +0000 UTC m=+0.303731528 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044)
Dec 06 08:31:18 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:31:21 np0005548790.localdomain sudo[71669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:21 np0005548790.localdomain sudo[71669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:31:21 np0005548790.localdomain sudo[71669]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:22 np0005548790.localdomain sudo[71684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:31:22 np0005548790.localdomain sudo[71684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:31:22 np0005548790.localdomain sudo[71684]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:22 np0005548790.localdomain sudo[71730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:31:22 np0005548790.localdomain sudo[71730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:31:22 np0005548790.localdomain sudo[71730]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:23 np0005548790.localdomain sudo[71745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 08:31:23 np0005548790.localdomain sudo[71745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 2025-12-06 08:31:23.535181457 +0000 UTC m=+0.060378248 container create a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_dhawan, io.openshift.expose-services=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-type=git, name=rhceph, release=1763362218, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7)
Dec 06 08:31:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4.scope.
Dec 06 08:31:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 2025-12-06 08:31:23.503301513 +0000 UTC m=+0.028498354 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 2025-12-06 08:31:23.610587804 +0000 UTC m=+0.135784595 container init a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_dhawan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, build-date=2025-11-26T19:44:28Z, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main)
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 2025-12-06 08:31:23.62199854 +0000 UTC m=+0.147195331 container start a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_dhawan, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 2025-12-06 08:31:23.622221786 +0000 UTC m=+0.147418637 container attach a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_dhawan, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 08:31:23 np0005548790.localdomain sad_dhawan[71816]: 167 167
Dec 06 08:31:23 np0005548790.localdomain systemd[1]: libpod-a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4.scope: Deactivated successfully.
Dec 06 08:31:23 np0005548790.localdomain podman[71801]: 2025-12-06 08:31:23.627159098 +0000 UTC m=+0.152355949 container died a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_dhawan, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-type=git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, release=1763362218)
Dec 06 08:31:23 np0005548790.localdomain podman[71821]: 2025-12-06 08:31:23.72067549 +0000 UTC m=+0.084199284 container remove a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_dhawan, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 08:31:23 np0005548790.localdomain systemd[1]: libpod-conmon-a534a2a0b36ea524a5c3fc0dc1184782a8629d4cfbb17db1105764c16817ced4.scope: Deactivated successfully.
Dec 06 08:31:23 np0005548790.localdomain podman[71843]: 
Dec 06 08:31:23 np0005548790.localdomain podman[71843]: 2025-12-06 08:31:23.915704729 +0000 UTC m=+0.060877030 container create ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_moore, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, ceph=True, version=7, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 08:31:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa.scope.
Dec 06 08:31:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:31:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77181fe1518cbed24172ee28a0baf71d127df3fef8a0acf709996b7238afd29c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77181fe1518cbed24172ee28a0baf71d127df3fef8a0acf709996b7238afd29c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77181fe1518cbed24172ee28a0baf71d127df3fef8a0acf709996b7238afd29c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 08:31:23 np0005548790.localdomain podman[71843]: 2025-12-06 08:31:23.966527729 +0000 UTC m=+0.111700040 container init ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_moore, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, release=1763362218, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Dec 06 08:31:23 np0005548790.localdomain podman[71843]: 2025-12-06 08:31:23.973375542 +0000 UTC m=+0.118547863 container start ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_moore, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc.)
Dec 06 08:31:23 np0005548790.localdomain podman[71843]: 2025-12-06 08:31:23.973629618 +0000 UTC m=+0.118801949 container attach ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_moore, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., release=1763362218, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7)
Dec 06 08:31:23 np0005548790.localdomain podman[71843]: 2025-12-06 08:31:23.896898395 +0000 UTC m=+0.042070696 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 08:31:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5953b5176cf454274d96562f32d596c8efaef7be347c9ac204f928807b329ea8-merged.mount: Deactivated successfully.
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]: [
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:     {
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "available": false,
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "ceph_device": false,
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "lsm_data": {},
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "lvs": [],
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "path": "/dev/sr0",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "rejected_reasons": [
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "Insufficient space (<5GB)",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "Has a FileSystem"
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         ],
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         "sys_api": {
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "actuators": null,
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "device_nodes": "sr0",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "human_readable_size": "482.00 KB",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "id_bus": "ata",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "model": "QEMU DVD-ROM",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "nr_requests": "2",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "partitions": {},
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "path": "/dev/sr0",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "removable": "1",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "rev": "2.5+",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "ro": "0",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "rotational": "1",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "sas_address": "",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "sas_device_handle": "",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "scheduler_mode": "mq-deadline",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "sectors": 0,
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "sectorsize": "2048",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "size": 493568.0,
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "support_discard": "0",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "type": "disk",
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:             "vendor": "QEMU"
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:         }
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]:     }
Dec 06 08:31:24 np0005548790.localdomain intelligent_moore[71859]: ]
Dec 06 08:31:24 np0005548790.localdomain systemd[1]: libpod-ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa.scope: Deactivated successfully.
Dec 06 08:31:24 np0005548790.localdomain podman[71843]: 2025-12-06 08:31:24.846179417 +0000 UTC m=+0.991351788 container died ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_moore, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, version=7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:31:24 np0005548790.localdomain systemd[1]: tmp-crun.gRKMtK.mount: Deactivated successfully.
Dec 06 08:31:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-77181fe1518cbed24172ee28a0baf71d127df3fef8a0acf709996b7238afd29c-merged.mount: Deactivated successfully.
Dec 06 08:31:24 np0005548790.localdomain podman[73620]: 2025-12-06 08:31:24.923554867 +0000 UTC m=+0.071253118 container remove ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_moore, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, release=1763362218, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git)
Dec 06 08:31:24 np0005548790.localdomain systemd[1]: libpod-conmon-ca13b4b45e998779f2484c7572f655b430655d7f549efb65ec926a76321902fa.scope: Deactivated successfully.
Dec 06 08:31:24 np0005548790.localdomain sudo[71745]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:25 np0005548790.localdomain sudo[73634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:31:25 np0005548790.localdomain sudo[73634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:31:25 np0005548790.localdomain sudo[73634]: pam_unix(sudo:session): session closed for user root
Dec 06 08:31:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:31:30 np0005548790.localdomain podman[73649]: 2025-12-06 08:31:30.555657083 +0000 UTC m=+0.070923488 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:31:30 np0005548790.localdomain podman[73649]: 2025-12-06 08:31:30.566187535 +0000 UTC m=+0.081453900 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:31:30 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:31:30 np0005548790.localdomain sshd[73669]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:31:32 np0005548790.localdomain sshd[73669]: Received disconnect from 43.163.123.45 port 33244:11: Bye Bye [preauth]
Dec 06 08:31:32 np0005548790.localdomain sshd[73669]: Disconnected from authenticating user root 43.163.123.45 port 33244 [preauth]
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:31:33 np0005548790.localdomain podman[73674]: 2025-12-06 08:31:33.583990587 +0000 UTC m=+0.090867892 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, release=1761123044, container_name=logrotate_crond, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:31:33 np0005548790.localdomain podman[73673]: 2025-12-06 08:31:33.624304475 +0000 UTC m=+0.134783697 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, release=1761123044, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1)
Dec 06 08:31:33 np0005548790.localdomain podman[73671]: 2025-12-06 08:31:33.673914543 +0000 UTC m=+0.184032376 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com)
Dec 06 08:31:33 np0005548790.localdomain podman[73673]: 2025-12-06 08:31:33.682163684 +0000 UTC m=+0.192642906 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, version=17.1.12, distribution-scope=public)
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:31:33 np0005548790.localdomain podman[73674]: 2025-12-06 08:31:33.727977769 +0000 UTC m=+0.234855064 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, name=rhosp17/openstack-cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container)
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:31:33 np0005548790.localdomain podman[73672]: 2025-12-06 08:31:33.744299836 +0000 UTC m=+0.253804151 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1)
Dec 06 08:31:33 np0005548790.localdomain podman[73672]: 2025-12-06 08:31:33.77618921 +0000 UTC m=+0.285693565 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, release=1761123044, version=17.1.12)
Dec 06 08:31:33 np0005548790.localdomain podman[73671]: 2025-12-06 08:31:33.784947504 +0000 UTC m=+0.295065337 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, release=1761123044)
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:31:33 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:31:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:31:34 np0005548790.localdomain podman[73765]: 2025-12-06 08:31:34.579720831 +0000 UTC m=+0.090510543 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Dec 06 08:31:34 np0005548790.localdomain sshd[73789]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:31:34 np0005548790.localdomain podman[73765]: 2025-12-06 08:31:34.978254615 +0000 UTC m=+0.489044287 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, 
com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Dec 06 08:31:34 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:31:36 np0005548790.localdomain sshd[73789]: Received disconnect from 35.247.75.98 port 44140:11: Bye Bye [preauth]
Dec 06 08:31:36 np0005548790.localdomain sshd[73789]: Disconnected from authenticating user root 35.247.75.98 port 44140 [preauth]
Dec 06 08:31:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:31:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:31:37 np0005548790.localdomain podman[73792]: 2025-12-06 08:31:37.571251049 +0000 UTC m=+0.084414260 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:31:37 np0005548790.localdomain systemd[1]: tmp-crun.2RWfNi.mount: Deactivated successfully.
Dec 06 08:31:37 np0005548790.localdomain podman[73793]: 2025-12-06 08:31:37.630521615 +0000 UTC m=+0.142613286 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, release=1761123044, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, tcib_managed=true)
Dec 06 08:31:37 np0005548790.localdomain podman[73792]: 2025-12-06 08:31:37.654143727 +0000 UTC m=+0.167306948 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:31:37 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:31:37 np0005548790.localdomain podman[73793]: 2025-12-06 08:31:37.706369435 +0000 UTC m=+0.218461176 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, container_name=ovn_controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, release=1761123044, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:31:37 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:31:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:31:49 np0005548790.localdomain podman[73841]: 2025-12-06 08:31:49.569289114 +0000 UTC m=+0.082148196 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4)
Dec 06 08:31:49 np0005548790.localdomain podman[73841]: 2025-12-06 08:31:49.765959539 +0000 UTC m=+0.278818571 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, 
architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 06 08:31:49 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:32:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:32:01 np0005548790.localdomain systemd[1]: tmp-crun.tJ1aq0.mount: Deactivated successfully.
Dec 06 08:32:01 np0005548790.localdomain podman[73869]: 2025-12-06 08:32:01.567626011 +0000 UTC m=+0.082746731 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd)
Dec 06 08:32:01 np0005548790.localdomain podman[73869]: 2025-12-06 08:32:01.578570634 +0000 UTC m=+0.093691324 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:32:01 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:32:03 np0005548790.localdomain sshd[73889]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: tmp-crun.p7Sttd.mount: Deactivated successfully.
Dec 06 08:32:04 np0005548790.localdomain podman[73891]: 2025-12-06 08:32:04.594372207 +0000 UTC m=+0.104638907 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, release=1761123044, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 06 08:32:04 np0005548790.localdomain podman[73893]: 2025-12-06 08:32:04.606996284 +0000 UTC m=+0.116366610 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, 
io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:32:04 np0005548790.localdomain podman[73894]: 2025-12-06 08:32:04.571203977 +0000 UTC m=+0.077591813 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red 
Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 06 08:32:04 np0005548790.localdomain podman[73892]: 2025-12-06 08:32:04.627172083 +0000 UTC m=+0.134775282 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:32:04 np0005548790.localdomain podman[73891]: 2025-12-06 08:32:04.633238675 +0000 UTC m=+0.143505425 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, 
io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:32:04 np0005548790.localdomain podman[73894]: 2025-12-06 08:32:04.655156561 +0000 UTC m=+0.161544357 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp17/openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:32:04 np0005548790.localdomain podman[73892]: 2025-12-06 08:32:04.676166592 +0000 UTC m=+0.183769761 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, vcs-type=git, url=https://www.redhat.com)
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:32:04 np0005548790.localdomain podman[73893]: 2025-12-06 08:32:04.741014605 +0000 UTC m=+0.250384921 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64)
Dec 06 08:32:04 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:32:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:32:05 np0005548790.localdomain systemd[1]: tmp-crun.hxfAtH.mount: Deactivated successfully.
Dec 06 08:32:05 np0005548790.localdomain podman[73981]: 2025-12-06 08:32:05.560875432 +0000 UTC m=+0.079487386 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_migration_target, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z)
Dec 06 08:32:05 np0005548790.localdomain podman[73981]: 2025-12-06 08:32:05.9421742 +0000 UTC m=+0.460786094 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 06 08:32:05 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:32:06 np0005548790.localdomain sshd[73889]: Received disconnect from 103.226.138.52 port 60966:11: Bye Bye [preauth]
Dec 06 08:32:06 np0005548790.localdomain sshd[73889]: Disconnected from authenticating user root 103.226.138.52 port 60966 [preauth]
Dec 06 08:32:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:32:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:32:08 np0005548790.localdomain systemd[1]: tmp-crun.169N5y.mount: Deactivated successfully.
Dec 06 08:32:08 np0005548790.localdomain podman[74004]: 2025-12-06 08:32:08.577082075 +0000 UTC m=+0.085857015 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent)
Dec 06 08:32:08 np0005548790.localdomain podman[74004]: 2025-12-06 08:32:08.617228348 +0000 UTC m=+0.126003378 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Dec 06 08:32:08 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:32:08 np0005548790.localdomain podman[74005]: 2025-12-06 08:32:08.634114079 +0000 UTC m=+0.141906193 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible)
Dec 06 08:32:08 np0005548790.localdomain podman[74005]: 2025-12-06 08:32:08.659363574 +0000 UTC m=+0.167155678 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 06 08:32:08 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:32:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:32:20 np0005548790.localdomain systemd[1]: tmp-crun.kme7oP.mount: Deactivated successfully.
Dec 06 08:32:20 np0005548790.localdomain podman[74052]: 2025-12-06 08:32:20.569548723 +0000 UTC m=+0.084534968 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 06 08:32:20 np0005548790.localdomain podman[74052]: 2025-12-06 08:32:20.748421783 +0000 UTC m=+0.263407958 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, release=1761123044, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 06 08:32:20 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:32:25 np0005548790.localdomain sudo[74081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:25 np0005548790.localdomain sudo[74081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:32:25 np0005548790.localdomain sudo[74081]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:25 np0005548790.localdomain sudo[74096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:32:25 np0005548790.localdomain sudo[74096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:32:26 np0005548790.localdomain systemd[1]: tmp-crun.Twd9CV.mount: Deactivated successfully.
Dec 06 08:32:26 np0005548790.localdomain podman[74180]: 2025-12-06 08:32:26.674266921 +0000 UTC m=+0.100747973 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, name=rhceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=1763362218, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 08:32:26 np0005548790.localdomain podman[74180]: 2025-12-06 08:32:26.778475196 +0000 UTC m=+0.204956298 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 06 08:32:27 np0005548790.localdomain sudo[74096]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:27 np0005548790.localdomain sudo[74246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:32:27 np0005548790.localdomain sudo[74246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:32:27 np0005548790.localdomain sudo[74246]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:27 np0005548790.localdomain sudo[74261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:32:27 np0005548790.localdomain sudo[74261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:32:27 np0005548790.localdomain sudo[74261]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:28 np0005548790.localdomain sudo[74308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:32:28 np0005548790.localdomain sudo[74308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:32:28 np0005548790.localdomain sudo[74308]: pam_unix(sudo:session): session closed for user root
Dec 06 08:32:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:32:32 np0005548790.localdomain podman[74323]: 2025-12-06 08:32:32.562943376 +0000 UTC m=+0.079120504 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z)
Dec 06 08:32:32 np0005548790.localdomain podman[74323]: 2025-12-06 08:32:32.60611197 +0000 UTC m=+0.122289088 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 06 08:32:32 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: tmp-crun.mscGTG.mount: Deactivated successfully.
Dec 06 08:32:35 np0005548790.localdomain podman[74343]: 2025-12-06 08:32:35.589177488 +0000 UTC m=+0.093699034 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid)
Dec 06 08:32:35 np0005548790.localdomain podman[74343]: 2025-12-06 08:32:35.626200347 +0000 UTC m=+0.130721853 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: tmp-crun.dc4LmQ.mount: Deactivated successfully.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:32:35 np0005548790.localdomain podman[74344]: 2025-12-06 08:32:35.69030079 +0000 UTC m=+0.191869358 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, distribution-scope=public)
Dec 06 08:32:35 np0005548790.localdomain podman[74346]: 2025-12-06 08:32:35.641889687 +0000 UTC m=+0.137461375 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, release=1761123044, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 06 08:32:35 np0005548790.localdomain podman[74344]: 2025-12-06 08:32:35.723177529 +0000 UTC m=+0.224746147 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:32:35 np0005548790.localdomain podman[74345]: 2025-12-06 08:32:35.737422609 +0000 UTC m=+0.235357309 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:32:35 np0005548790.localdomain podman[74345]: 2025-12-06 08:32:35.766073255 +0000 UTC m=+0.264007945 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 08:32:35 np0005548790.localdomain podman[74346]: 2025-12-06 08:32:35.773676148 +0000 UTC m=+0.269247816 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, container_name=logrotate_crond, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:32:35 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:32:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:32:36 np0005548790.localdomain podman[74432]: 2025-12-06 08:32:36.568583737 +0000 UTC m=+0.084212031 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 06 08:32:36 np0005548790.localdomain podman[74432]: 2025-12-06 08:32:36.942309164 +0000 UTC m=+0.457937498 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4)
Dec 06 08:32:36 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:32:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:32:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:32:39 np0005548790.localdomain podman[74454]: 2025-12-06 08:32:39.571067075 +0000 UTC m=+0.081624302 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Dec 06 08:32:39 np0005548790.localdomain podman[74455]: 2025-12-06 08:32:39.630259736 +0000 UTC m=+0.137854434 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=)
Dec 06 08:32:39 np0005548790.localdomain podman[74454]: 2025-12-06 08:32:39.652126381 +0000 UTC m=+0.162683588 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Dec 06 08:32:39 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:32:39 np0005548790.localdomain podman[74455]: 2025-12-06 08:32:39.683282033 +0000 UTC m=+0.190876711 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 06 08:32:39 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:32:41 np0005548790.localdomain sshd[74502]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:32:42 np0005548790.localdomain sshd[74502]: Received disconnect from 43.163.123.45 port 60136:11: Bye Bye [preauth]
Dec 06 08:32:42 np0005548790.localdomain sshd[74502]: Disconnected from authenticating user root 43.163.123.45 port 60136 [preauth]
Dec 06 08:32:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:32:51 np0005548790.localdomain podman[74504]: 2025-12-06 08:32:51.545381061 +0000 UTC m=+0.067078842 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, batch=17.1_20251118.1)
Dec 06 08:32:51 np0005548790.localdomain podman[74504]: 2025-12-06 08:32:51.740518474 +0000 UTC m=+0.262216305 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 06 08:32:51 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:33:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:33:03 np0005548790.localdomain systemd[1]: tmp-crun.4ldU3s.mount: Deactivated successfully.
Dec 06 08:33:03 np0005548790.localdomain podman[74533]: 2025-12-06 08:33:03.569122035 +0000 UTC m=+0.083526053 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, container_name=collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:33:03 np0005548790.localdomain podman[74533]: 2025-12-06 08:33:03.604193601 +0000 UTC m=+0.118597679 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:33:03 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: tmp-crun.8tj8Z6.mount: Deactivated successfully.
Dec 06 08:33:06 np0005548790.localdomain podman[74556]: 2025-12-06 08:33:06.56339191 +0000 UTC m=+0.072495557 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z)
Dec 06 08:33:06 np0005548790.localdomain podman[74554]: 2025-12-06 08:33:06.628654725 +0000 UTC m=+0.140694890 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 06 08:33:06 np0005548790.localdomain podman[74554]: 2025-12-06 08:33:06.636451453 +0000 UTC m=+0.148491608 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1)
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:33:06 np0005548790.localdomain podman[74556]: 2025-12-06 08:33:06.646480351 +0000 UTC m=+0.155583958 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:33:06 np0005548790.localdomain podman[74566]: 2025-12-06 08:33:06.606035471 +0000 UTC m=+0.107144655 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12)
Dec 06 08:33:06 np0005548790.localdomain podman[74555]: 2025-12-06 08:33:06.614823385 +0000 UTC m=+0.127618131 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z)
Dec 06 08:33:06 np0005548790.localdomain podman[74566]: 2025-12-06 08:33:06.684004674 +0000 UTC m=+0.185113838 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:33:06 np0005548790.localdomain podman[74555]: 2025-12-06 08:33:06.698053229 +0000 UTC m=+0.210847975 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, container_name=ceilometer_agent_compute)
Dec 06 08:33:06 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:33:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:33:07 np0005548790.localdomain systemd[1]: tmp-crun.Gt92J6.mount: Deactivated successfully.
Dec 06 08:33:07 np0005548790.localdomain podman[74646]: 2025-12-06 08:33:07.561826499 +0000 UTC m=+0.076952737 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=)
Dec 06 08:33:07 np0005548790.localdomain podman[74646]: 2025-12-06 08:33:07.923080562 +0000 UTC m=+0.438206780 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:33:07 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:33:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:33:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:33:10 np0005548790.localdomain systemd[1]: tmp-crun.wcYMoh.mount: Deactivated successfully.
Dec 06 08:33:10 np0005548790.localdomain podman[74669]: 2025-12-06 08:33:10.562967999 +0000 UTC m=+0.079646518 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:33:10 np0005548790.localdomain podman[74670]: 2025-12-06 08:33:10.611928658 +0000 UTC m=+0.123558252 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com)
Dec 06 08:33:10 np0005548790.localdomain podman[74670]: 2025-12-06 08:33:10.637473061 +0000 UTC m=+0.149102665 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com)
Dec 06 08:33:10 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:33:10 np0005548790.localdomain podman[74669]: 2025-12-06 08:33:10.687613831 +0000 UTC m=+0.204292350 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, version=17.1.12, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Dec 06 08:33:10 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:33:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:33:22 np0005548790.localdomain podman[74718]: 2025-12-06 08:33:22.566654178 +0000 UTC m=+0.082047082 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible)
Dec 06 08:33:22 np0005548790.localdomain podman[74718]: 2025-12-06 08:33:22.743894253 +0000 UTC m=+0.259287147 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd)
Dec 06 08:33:22 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:33:28 np0005548790.localdomain sshd[74747]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:33:28 np0005548790.localdomain sudo[74749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:33:28 np0005548790.localdomain sudo[74749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:33:28 np0005548790.localdomain sudo[74749]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:28 np0005548790.localdomain sudo[74764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:33:28 np0005548790.localdomain sudo[74764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:33:29 np0005548790.localdomain sudo[74764]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:30 np0005548790.localdomain sudo[74811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:33:30 np0005548790.localdomain sudo[74811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:33:30 np0005548790.localdomain sudo[74811]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:30 np0005548790.localdomain sshd[74747]: Received disconnect from 35.247.75.98 port 44794:11: Bye Bye [preauth]
Dec 06 08:33:30 np0005548790.localdomain sshd[74747]: Disconnected from authenticating user root 35.247.75.98 port 44794 [preauth]
Dec 06 08:33:34 np0005548790.localdomain sshd[74826]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:33:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:33:34 np0005548790.localdomain podman[74828]: 2025-12-06 08:33:34.572416273 +0000 UTC m=+0.083300247 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4)
Dec 06 08:33:34 np0005548790.localdomain podman[74828]: 2025-12-06 08:33:34.584149136 +0000 UTC m=+0.095033100 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 06 08:33:34 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:33:37 np0005548790.localdomain sshd[74826]: Received disconnect from 103.226.138.52 port 35420:11: Bye Bye [preauth]
Dec 06 08:33:37 np0005548790.localdomain sshd[74826]: Disconnected from authenticating user root 103.226.138.52 port 35420 [preauth]
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:33:37 np0005548790.localdomain podman[74849]: 2025-12-06 08:33:37.171359266 +0000 UTC m=+0.089540283 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:33:37 np0005548790.localdomain podman[74849]: 2025-12-06 08:33:37.200101135 +0000 UTC m=+0.118282152 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public)
Dec 06 08:33:37 np0005548790.localdomain podman[74855]: 2025-12-06 08:33:37.220105079 +0000 UTC m=+0.134739641 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=logrotate_crond)
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:33:37 np0005548790.localdomain podman[74847]: 2025-12-06 08:33:37.272847089 +0000 UTC m=+0.199242545 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:33:37 np0005548790.localdomain podman[74847]: 2025-12-06 08:33:37.281599162 +0000 UTC m=+0.207994628 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:33:37 np0005548790.localdomain podman[74855]: 2025-12-06 08:33:37.312918959 +0000 UTC m=+0.227553561 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, release=1761123044, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc.)
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:33:37 np0005548790.localdomain podman[74848]: 2025-12-06 08:33:37.329213134 +0000 UTC m=+0.248507901 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4)
Dec 06 08:33:37 np0005548790.localdomain podman[74848]: 2025-12-06 08:33:37.359170495 +0000 UTC m=+0.278465282 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, release=1761123044, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4)
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:33:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:33:38 np0005548790.localdomain podman[74938]: 2025-12-06 08:33:38.065250152 +0000 UTC m=+0.077524813 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Dec 06 08:33:38 np0005548790.localdomain systemd[1]: tmp-crun.MdXKdZ.mount: Deactivated successfully.
Dec 06 08:33:38 np0005548790.localdomain podman[74938]: 2025-12-06 08:33:38.493294748 +0000 UTC m=+0.505569349 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, 
release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:33:38 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:33:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:33:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:33:41 np0005548790.localdomain systemd[1]: tmp-crun.Zd7HXz.mount: Deactivated successfully.
Dec 06 08:33:41 np0005548790.localdomain podman[74961]: 2025-12-06 08:33:41.556417376 +0000 UTC m=+0.074399150 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vcs-type=git, batch=17.1_20251118.1, container_name=ovn_metadata_agent, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public)
Dec 06 08:33:41 np0005548790.localdomain podman[74962]: 2025-12-06 08:33:41.616238134 +0000 UTC m=+0.130409105 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., 
url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4)
Dec 06 08:33:41 np0005548790.localdomain podman[74961]: 2025-12-06 08:33:41.647356105 +0000 UTC m=+0.165337949 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_metadata_agent, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:33:41 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:33:41 np0005548790.localdomain podman[74962]: 2025-12-06 08:33:41.661201175 +0000 UTC m=+0.175372156 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com)
Dec 06 08:33:41 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:33:47 np0005548790.localdomain sudo[75054]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plrwefuwdnidfwjkgyznxmsrzvfxmphl ; /usr/bin/python3
Dec 06 08:33:47 np0005548790.localdomain sudo[75054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:33:48 np0005548790.localdomain python3[75056]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:33:48 np0005548790.localdomain sudo[75054]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:48 np0005548790.localdomain sudo[75099]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjsjwucshqsurfouqwgjjkonrjhmybuz ; /usr/bin/python3
Dec 06 08:33:48 np0005548790.localdomain sudo[75099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:33:48 np0005548790.localdomain python3[75101]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010027.7232997-113372-174270403938616/source _original_basename=tmpbwt2u6uj follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:33:48 np0005548790.localdomain sudo[75099]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:49 np0005548790.localdomain sudo[75129]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmvsolnelxrfvenrpgvrdwnxpmvcjudg ; /usr/bin/python3
Dec 06 08:33:49 np0005548790.localdomain sudo[75129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:33:49 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:33:49 np0005548790.localdomain recover_tripleo_nova_virtqemud[75133]: 62556
Dec 06 08:33:49 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:33:49 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:33:49 np0005548790.localdomain python3[75131]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:33:49 np0005548790.localdomain sudo[75129]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:50 np0005548790.localdomain sudo[75181]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pccihxacxducioglbvpugwgmszichipp ; /usr/bin/python3
Dec 06 08:33:50 np0005548790.localdomain sudo[75181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:33:50 np0005548790.localdomain sudo[75181]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:50 np0005548790.localdomain sudo[75199]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pkgnkqobbenkdiprmuwnqbengzwhlmpg ; /usr/bin/python3
Dec 06 08:33:50 np0005548790.localdomain sudo[75199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:33:50 np0005548790.localdomain sudo[75199]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:51 np0005548790.localdomain sudo[75303]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpamtzqegtcqugjycxafslvyegessdez ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010030.7394736-113614-99679761671187/async_wrapper.py 678753906868 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010030.7394736-113614-99679761671187/AnsiballZ_command.py _
Dec 06 08:33:51 np0005548790.localdomain sudo[75303]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 08:33:51 np0005548790.localdomain ansible-async_wrapper.py[75305]: Invoked with 678753906868 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010030.7394736-113614-99679761671187/AnsiballZ_command.py _
Dec 06 08:33:51 np0005548790.localdomain ansible-async_wrapper.py[75308]: Starting module and watcher
Dec 06 08:33:51 np0005548790.localdomain ansible-async_wrapper.py[75308]: Start watching 75309 (3600)
Dec 06 08:33:51 np0005548790.localdomain ansible-async_wrapper.py[75309]: Start module (75309)
Dec 06 08:33:51 np0005548790.localdomain ansible-async_wrapper.py[75305]: Return async_wrapper task started.
Dec 06 08:33:51 np0005548790.localdomain sudo[75303]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:51 np0005548790.localdomain sudo[75327]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzdyibrdksfbkrtaqrhhnacnohjnbfiw ; /usr/bin/python3
Dec 06 08:33:51 np0005548790.localdomain sudo[75327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:33:51 np0005548790.localdomain python3[75329]: ansible-ansible.legacy.async_status Invoked with jid=678753906868.75305 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:33:51 np0005548790.localdomain sudo[75327]: pam_unix(sudo:session): session closed for user root
Dec 06 08:33:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:33:53 np0005548790.localdomain sshd[75373]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:33:53 np0005548790.localdomain systemd[1]: tmp-crun.gLIws8.mount: Deactivated successfully.
Dec 06 08:33:53 np0005548790.localdomain podman[75374]: 2025-12-06 08:33:53.608030997 +0000 UTC m=+0.114949432 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:33:53 np0005548790.localdomain podman[75374]: 2025-12-06 08:33:53.816884457 +0000 UTC m=+0.323802902 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:33:53 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]:    (file: /etc/puppet/hiera.yaml)
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]: Warning: Undefined variable '::deploy_config_name';
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]:    (file & line not available)
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]:    (file & line not available)
Dec 06 08:33:54 np0005548790.localdomain puppet-user[75323]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 06 08:33:55 np0005548790.localdomain sshd[75373]: Received disconnect from 43.163.123.45 port 58788:11: Bye Bye [preauth]
Dec 06 08:33:55 np0005548790.localdomain sshd[75373]: Disconnected from authenticating user root 43.163.123.45 port 58788 [preauth]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Notice: Compiled catalog for np0005548790.localdomain in environment production in 0.19 seconds
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Notice: Applied catalog in 0.30 seconds
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Application:
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    Initial environment: production
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    Converged environment: production
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:          Run mode: user
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Changes:
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Events:
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Resources:
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:             Total: 19
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Time:
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:          Schedule: 0.00
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:           Package: 0.00
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:              Exec: 0.01
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:            Augeas: 0.01
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:              File: 0.02
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:           Service: 0.07
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    Config retrieval: 0.25
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    Transaction evaluation: 0.29
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:    Catalog application: 0.30
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:          Last run: 1765010035
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:        Filebucket: 0.00
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:             Total: 0.30
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]: Version:
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:            Config: 1765010034
Dec 06 08:33:55 np0005548790.localdomain puppet-user[75323]:            Puppet: 7.10.0
Dec 06 08:33:55 np0005548790.localdomain ansible-async_wrapper.py[75309]: Module complete (75309)
Dec 06 08:33:56 np0005548790.localdomain ansible-async_wrapper.py[75308]: Done in kid B.
Dec 06 08:34:01 np0005548790.localdomain sudo[75494]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dntwzytjcomwloqcnfqqulxfiphyxueq ; /usr/bin/python3
Dec 06 08:34:01 np0005548790.localdomain sudo[75494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:01 np0005548790.localdomain python3[75496]: ansible-ansible.legacy.async_status Invoked with jid=678753906868.75305 mode=status _async_dir=/tmp/.ansible_async
Dec 06 08:34:01 np0005548790.localdomain sudo[75494]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:02 np0005548790.localdomain sudo[75510]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfzhovrxlgizkkoagiuauzcadboivavk ; /usr/bin/python3
Dec 06 08:34:02 np0005548790.localdomain sudo[75510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:02 np0005548790.localdomain python3[75512]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:34:02 np0005548790.localdomain sudo[75510]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:02 np0005548790.localdomain sudo[75526]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cetjpzfngppuppkplssrwwaurdpigbvi ; /usr/bin/python3
Dec 06 08:34:02 np0005548790.localdomain sudo[75526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:02 np0005548790.localdomain python3[75528]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:34:02 np0005548790.localdomain sudo[75526]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:03 np0005548790.localdomain sudo[75576]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbyhfdrowdggbmgoywabcqyjruzkrqjr ; /usr/bin/python3
Dec 06 08:34:03 np0005548790.localdomain sudo[75576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:03 np0005548790.localdomain python3[75578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:03 np0005548790.localdomain sudo[75576]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:03 np0005548790.localdomain sudo[75594]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxtxhbgoizzytkexershfssyoktopvfr ; /usr/bin/python3
Dec 06 08:34:03 np0005548790.localdomain sudo[75594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:03 np0005548790.localdomain python3[75596]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp03unw5zk recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 08:34:03 np0005548790.localdomain sudo[75594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:03 np0005548790.localdomain sudo[75624]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eauyglyvsccpkpkxolnfvpnwuegtrhir ; /usr/bin/python3
Dec 06 08:34:03 np0005548790.localdomain sudo[75624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:04 np0005548790.localdomain python3[75626]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:04 np0005548790.localdomain sudo[75624]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:04 np0005548790.localdomain sudo[75640]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uurnidigqnyxqtrmelhhsrutiezkqwdq ; /usr/bin/python3
Dec 06 08:34:04 np0005548790.localdomain sudo[75640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:04 np0005548790.localdomain sudo[75640]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:04 np0005548790.localdomain sudo[75730]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eouyqsxpcwpsmipclzzpdibseqccmcjf ; /usr/bin/python3
Dec 06 08:34:04 np0005548790.localdomain sudo[75730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:34:05 np0005548790.localdomain podman[75733]: 2025-12-06 08:34:05.0830393 +0000 UTC m=+0.084072638 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z)
Dec 06 08:34:05 np0005548790.localdomain podman[75733]: 2025-12-06 08:34:05.094965098 +0000 UTC m=+0.095998416 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64)
Dec 06 08:34:05 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:34:05 np0005548790.localdomain python3[75732]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 06 08:34:05 np0005548790.localdomain sudo[75730]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:05 np0005548790.localdomain sudo[75769]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqwovupldnhguvrkocosszxfmzxmvxhf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:05 np0005548790.localdomain sudo[75769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:05 np0005548790.localdomain python3[75771]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:05 np0005548790.localdomain sudo[75769]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 np0005548790.localdomain sudo[75785]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wlkphracuvczasvmwljmcasstgqcwvaw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:06 np0005548790.localdomain sudo[75785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:06 np0005548790.localdomain sudo[75785]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:06 np0005548790.localdomain sudo[75801]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtjehjlwbzcfbiqdcrswdgrphpnzddrm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:06 np0005548790.localdomain sudo[75801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:06 np0005548790.localdomain python3[75803]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:34:06 np0005548790.localdomain sudo[75801]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:07 np0005548790.localdomain sudo[75851]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxdqbrrwzqvipefyranpdhplxmfqxmsd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:07 np0005548790.localdomain sudo[75851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:07 np0005548790.localdomain python3[75853]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:07 np0005548790.localdomain sudo[75851]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:07 np0005548790.localdomain sudo[75869]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-onapzftylendyookvmxrisiblpystkyy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:07 np0005548790.localdomain sudo[75869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: tmp-crun.9PZKI7.mount: Deactivated successfully.
Dec 06 08:34:07 np0005548790.localdomain python3[75872]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:07 np0005548790.localdomain podman[75874]: 2025-12-06 08:34:07.507691117 +0000 UTC m=+0.073843104 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64)
Dec 06 08:34:07 np0005548790.localdomain sudo[75869]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:07 np0005548790.localdomain podman[75873]: 2025-12-06 08:34:07.529031097 +0000 UTC m=+0.092617986 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, 
io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:34:07 np0005548790.localdomain podman[75871]: 2025-12-06 08:34:07.560900869 +0000 UTC m=+0.127057577 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=)
Dec 06 08:34:07 np0005548790.localdomain podman[75873]: 2025-12-06 08:34:07.615047746 +0000 UTC m=+0.178634645 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true)
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:34:07 np0005548790.localdomain podman[75871]: 2025-12-06 08:34:07.638715518 +0000 UTC m=+0.204872136 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, tcib_managed=true, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z)
Dec 06 08:34:07 np0005548790.localdomain podman[75874]: 2025-12-06 08:34:07.644350818 +0000 UTC m=+0.210502805 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:34:07 np0005548790.localdomain podman[75875]: 2025-12-06 08:34:07.612636171 +0000 UTC m=+0.175524471 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com)
Dec 06 08:34:07 np0005548790.localdomain podman[75875]: 2025-12-06 08:34:07.691489078 +0000 UTC m=+0.254377338 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-cron-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 06 08:34:07 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:34:07 np0005548790.localdomain sudo[76020]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owijkurxpkmaqjgvgkxkwfntjzdocphp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:07 np0005548790.localdomain sudo[76020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:08 np0005548790.localdomain python3[76022]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:08 np0005548790.localdomain sudo[76020]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:08 np0005548790.localdomain sudo[76038]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcvaprqpxntjlysasumeyfhfdiwzjjbq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:08 np0005548790.localdomain sudo[76038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:08 np0005548790.localdomain python3[76040]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:08 np0005548790.localdomain sudo[76038]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:08 np0005548790.localdomain sudo[76100]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbjkngeemqvsdlybmnmplkxkjxrruvxi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:34:08 np0005548790.localdomain sudo[76100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:08 np0005548790.localdomain systemd[1]: tmp-crun.Nj4f0C.mount: Deactivated successfully.
Dec 06 08:34:08 np0005548790.localdomain podman[76102]: 2025-12-06 08:34:08.884090565 +0000 UTC m=+0.111556182 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z)
Dec 06 08:34:08 np0005548790.localdomain python3[76103]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:08 np0005548790.localdomain sudo[76100]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:09 np0005548790.localdomain sudo[76140]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bncsimzpqlvgqxgjztszfdbwwwkrhawv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:09 np0005548790.localdomain sudo[76140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:09 np0005548790.localdomain python3[76142]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:09 np0005548790.localdomain sudo[76140]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:09 np0005548790.localdomain podman[76102]: 2025-12-06 08:34:09.284152814 +0000 UTC m=+0.511618411 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute)
Dec 06 08:34:09 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:34:09 np0005548790.localdomain sudo[76204]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ejrtiaahgbdswfrwgydzsexipvnktrem ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:09 np0005548790.localdomain sudo[76204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:09 np0005548790.localdomain python3[76206]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:09 np0005548790.localdomain sudo[76204]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:09 np0005548790.localdomain sudo[76222]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snrfcdkuqaxambfcgkkouvqsbcrixpdn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:09 np0005548790.localdomain sudo[76222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:10 np0005548790.localdomain python3[76224]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:10 np0005548790.localdomain sudo[76222]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:10 np0005548790.localdomain sudo[76252]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hielnvjgnqdqoodihlivzullwznzmarm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:10 np0005548790.localdomain sudo[76252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:10 np0005548790.localdomain python3[76254]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:34:10 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:34:10 np0005548790.localdomain systemd-rc-local-generator[76275]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:34:10 np0005548790.localdomain systemd-sysv-generator[76280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:34:10 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:34:10 np0005548790.localdomain sudo[76252]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:11 np0005548790.localdomain sudo[76338]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btkxsuolwujxfgxbrndzktiprgybempn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:11 np0005548790.localdomain sudo[76338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:11 np0005548790.localdomain python3[76340]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:11 np0005548790.localdomain sudo[76338]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:11 np0005548790.localdomain sudo[76356]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vphmcrxlcpmccahsplhdcbazgemzyabj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:11 np0005548790.localdomain sudo[76356]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:11 np0005548790.localdomain python3[76358]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:11 np0005548790.localdomain sudo[76356]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:12 np0005548790.localdomain sudo[76418]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cpsdlbnfnicxhhoftncluzqvyrpfissr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:12 np0005548790.localdomain sudo[76418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:34:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:34:12 np0005548790.localdomain podman[76422]: 2025-12-06 08:34:12.153977906 +0000 UTC m=+0.093872159 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller)
Dec 06 08:34:12 np0005548790.localdomain podman[76422]: 2025-12-06 08:34:12.175980694 +0000 UTC m=+0.115874987 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 06 08:34:12 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:34:12 np0005548790.localdomain python3[76420]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 06 08:34:12 np0005548790.localdomain podman[76421]: 2025-12-06 08:34:12.213944459 +0000 UTC m=+0.154306265 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 06 08:34:12 np0005548790.localdomain sudo[76418]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:12 np0005548790.localdomain podman[76421]: 2025-12-06 08:34:12.287508874 +0000 UTC m=+0.227870690 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:34:12 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:34:12 np0005548790.localdomain sudo[76483]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfbnsvpsjgjtxvkjnegayqsbylzkfwsg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:12 np0005548790.localdomain sudo[76483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:12 np0005548790.localdomain python3[76485]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:34:12 np0005548790.localdomain sudo[76483]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:12 np0005548790.localdomain sudo[76513]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rucsalbfcqowimszfjejzcatiwxjochd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:12 np0005548790.localdomain sudo[76513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:13 np0005548790.localdomain python3[76515]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:34:13 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:34:13 np0005548790.localdomain systemd-rc-local-generator[76538]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:34:13 np0005548790.localdomain systemd-sysv-generator[76541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:34:13 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:34:13 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 08:34:13 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 08:34:13 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 08:34:13 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 08:34:13 np0005548790.localdomain sudo[76513]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:13 np0005548790.localdomain sudo[76570]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjglnnbnwvavkygexcvjupyeyiuxgcwk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:13 np0005548790.localdomain sudo[76570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:14 np0005548790.localdomain python3[76572]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 06 08:34:14 np0005548790.localdomain sudo[76570]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:14 np0005548790.localdomain sudo[76586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbfyhunqxxnhwgwvjkuxkbaicuxocjkx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:14 np0005548790.localdomain sudo[76586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:14 np0005548790.localdomain sudo[76586]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:15 np0005548790.localdomain sudo[76629]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihdwupzrgtxhmglwamzxkvuozddkjjah ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:34:15 np0005548790.localdomain sudo[76629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:34:15 np0005548790.localdomain python3[76631]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 06 08:34:16 np0005548790.localdomain podman[76669]: 2025-12-06 08:34:16.193540754 +0000 UTC m=+0.110696549 container create 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public)
Dec 06 08:34:16 np0005548790.localdomain podman[76669]: 2025-12-06 08:34:16.131434954 +0000 UTC m=+0.048590829 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.scope.
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a613f65a5b410bb267b33e0b08cce4603bbfb5f7e30ec2a1f53d0927d5f78cd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a613f65a5b410bb267b33e0b08cce4603bbfb5f7e30ec2a1f53d0927d5f78cd/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a613f65a5b410bb267b33e0b08cce4603bbfb5f7e30ec2a1f53d0927d5f78cd/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a613f65a5b410bb267b33e0b08cce4603bbfb5f7e30ec2a1f53d0927d5f78cd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a613f65a5b410bb267b33e0b08cce4603bbfb5f7e30ec2a1f53d0927d5f78cd/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:34:16 np0005548790.localdomain podman[76669]: 2025-12-06 08:34:16.297955663 +0000 UTC m=+0.215111508 container init 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step5, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12)
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: tmp-crun.QcmFpd.mount: Deactivated successfully.
Dec 06 08:34:16 np0005548790.localdomain sudo[76690]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:34:16 np0005548790.localdomain podman[76669]: 2025-12-06 08:34:16.357189567 +0000 UTC m=+0.274345352 container start 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Dec 06 08:34:16 np0005548790.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 06 08:34:16 np0005548790.localdomain python3[76631]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 06 08:34:16 np0005548790.localdomain podman[76691]: 2025-12-06 08:34:16.449960966 +0000 UTC m=+0.083955515 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, release=1761123044, container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:34:16 np0005548790.localdomain podman[76691]: 2025-12-06 08:34:16.511321555 +0000 UTC m=+0.145316114 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 06 08:34:16 np0005548790.localdomain podman[76691]: unhealthy
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Queued start job for default target Main User Target.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Created slice User Application Slice.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Reached target Paths.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Reached target Timers.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Starting D-Bus User Message Bus Socket...
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Starting Create User's Volatile Files and Directories...
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Listening on D-Bus User Message Bus Socket.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Reached target Sockets.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Finished Create User's Volatile Files and Directories.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Reached target Basic System.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Reached target Main User Target.
Dec 06 08:34:16 np0005548790.localdomain systemd[76710]: Startup finished in 148ms.
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started User Manager for UID 0.
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started Session c10 of User root.
Dec 06 08:34:16 np0005548790.localdomain sudo[76690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Dec 06 08:34:16 np0005548790.localdomain sudo[76690]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: session-c10.scope: Deactivated successfully.
Dec 06 08:34:16 np0005548790.localdomain podman[76790]: 2025-12-06 08:34:16.8520953 +0000 UTC m=+0.086557554 container create bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_wait_for_compute_service, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3.scope.
Dec 06 08:34:16 np0005548790.localdomain podman[76790]: 2025-12-06 08:34:16.798971481 +0000 UTC m=+0.033433735 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:34:16 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead6993018cbf28a6911fcc6a4afc0bfdf470e6d9ea5b6906d250c0b5f41599c/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead6993018cbf28a6911fcc6a4afc0bfdf470e6d9ea5b6906d250c0b5f41599c/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 08:34:16 np0005548790.localdomain podman[76790]: 2025-12-06 08:34:16.9310439 +0000 UTC m=+0.165506094 container init bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step5, container_name=nova_wait_for_compute_service, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public)
Dec 06 08:34:16 np0005548790.localdomain podman[76790]: 2025-12-06 08:34:16.940910554 +0000 UTC m=+0.175372768 container start bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.41.4, container_name=nova_wait_for_compute_service, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team)
Dec 06 08:34:16 np0005548790.localdomain podman[76790]: 2025-12-06 08:34:16.941192981 +0000 UTC m=+0.175655215 container attach bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z)
Dec 06 08:34:16 np0005548790.localdomain sudo[76810]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 08:34:16 np0005548790.localdomain sudo[76810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Dec 06 08:34:17 np0005548790.localdomain sudo[76810]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:20 np0005548790.localdomain sshd[76814]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:34:21 np0005548790.localdomain sshd[76814]: Invalid user admin from 45.135.232.92 port 58782
Dec 06 08:34:22 np0005548790.localdomain sshd[76814]: Connection reset by invalid user admin 45.135.232.92 port 58782 [preauth]
Dec 06 08:34:22 np0005548790.localdomain sshd[76816]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:34:24 np0005548790.localdomain sshd[76816]: Invalid user temp from 45.135.232.92 port 58800
Dec 06 08:34:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:34:24 np0005548790.localdomain podman[76818]: 2025-12-06 08:34:24.161063046 +0000 UTC m=+0.102198981 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:34:24 np0005548790.localdomain podman[76818]: 2025-12-06 08:34:24.377382446 +0000 UTC m=+0.318518291 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=metrics_qdr, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1)
Dec 06 08:34:24 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:34:24 np0005548790.localdomain sshd[76816]: Connection reset by invalid user temp 45.135.232.92 port 58800 [preauth]
Dec 06 08:34:24 np0005548790.localdomain sshd[76847]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Activating special unit Exit the Session...
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped target Main User Target.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped target Basic System.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped target Paths.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped target Sockets.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped target Timers.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Closed D-Bus User Message Bus Socket.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Stopped Create User's Volatile Files and Directories.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Removed slice User Application Slice.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Reached target Shutdown.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Finished Exit the Session.
Dec 06 08:34:26 np0005548790.localdomain systemd[76710]: Reached target Exit the Session.
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 08:34:26 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 06 08:34:26 np0005548790.localdomain sshd[76847]: Invalid user user from 45.135.232.92 port 61162
Dec 06 08:34:27 np0005548790.localdomain sshd[76847]: Connection reset by invalid user user 45.135.232.92 port 61162 [preauth]
Dec 06 08:34:27 np0005548790.localdomain sshd[76850]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:34:29 np0005548790.localdomain sshd[76850]: Connection reset by authenticating user root 45.135.232.92 port 61170 [preauth]
Dec 06 08:34:29 np0005548790.localdomain sshd[76852]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:34:30 np0005548790.localdomain sudo[76854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:34:30 np0005548790.localdomain sudo[76854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:34:30 np0005548790.localdomain sudo[76854]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:30 np0005548790.localdomain sudo[76869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:34:30 np0005548790.localdomain sudo[76869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:34:31 np0005548790.localdomain sudo[76869]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:31 np0005548790.localdomain sshd[76852]: Invalid user admin from 45.135.232.92 port 61190
Dec 06 08:34:31 np0005548790.localdomain sudo[76916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:34:31 np0005548790.localdomain sudo[76916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:34:31 np0005548790.localdomain sudo[76916]: pam_unix(sudo:session): session closed for user root
Dec 06 08:34:31 np0005548790.localdomain sshd[76852]: Connection reset by invalid user admin 45.135.232.92 port 61190 [preauth]
Dec 06 08:34:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:34:35 np0005548790.localdomain podman[76931]: 2025-12-06 08:34:35.605267505 +0000 UTC m=+0.085955747 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z)
Dec 06 08:34:35 np0005548790.localdomain podman[76931]: 2025-12-06 08:34:35.622219078 +0000 UTC m=+0.102907330 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-collectd-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd)
Dec 06 08:34:35 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: tmp-crun.vYrbQ9.mount: Deactivated successfully.
Dec 06 08:34:38 np0005548790.localdomain podman[76951]: 2025-12-06 08:34:38.603621613 +0000 UTC m=+0.108798558 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: tmp-crun.iPIFQ8.mount: Deactivated successfully.
Dec 06 08:34:38 np0005548790.localdomain podman[76950]: 2025-12-06 08:34:38.635968317 +0000 UTC m=+0.142434207 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1)
Dec 06 08:34:38 np0005548790.localdomain podman[76950]: 2025-12-06 08:34:38.668344022 +0000 UTC m=+0.174809912 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid)
Dec 06 08:34:38 np0005548790.localdomain podman[76953]: 2025-12-06 08:34:38.680531997 +0000 UTC m=+0.182621480 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-cron-container, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:34:38 np0005548790.localdomain podman[76953]: 2025-12-06 08:34:38.686647791 +0000 UTC m=+0.188737244 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:34:38 np0005548790.localdomain podman[76952]: 2025-12-06 08:34:38.724504393 +0000 UTC m=+0.230588782 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Dec 06 08:34:38 np0005548790.localdomain podman[76952]: 2025-12-06 08:34:38.753547158 +0000 UTC m=+0.259631537 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:34:38 np0005548790.localdomain podman[76951]: 2025-12-06 08:34:38.784226448 +0000 UTC m=+0.289403363 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:34:38 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:34:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:34:39 np0005548790.localdomain podman[77040]: 2025-12-06 08:34:39.55150604 +0000 UTC m=+0.069074277 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, version=17.1.12, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:34:39 np0005548790.localdomain podman[77040]: 2025-12-06 08:34:39.944629064 +0000 UTC m=+0.462197311 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 06 08:34:39 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:34:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:34:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:34:42 np0005548790.localdomain podman[77063]: 2025-12-06 08:34:42.562529405 +0000 UTC m=+0.077041460 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:34:42 np0005548790.localdomain podman[77063]: 2025-12-06 08:34:42.604527417 +0000 UTC m=+0.119039472 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044)
Dec 06 08:34:42 np0005548790.localdomain systemd[1]: tmp-crun.8aiYsk.mount: Deactivated successfully.
Dec 06 08:34:42 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:34:42 np0005548790.localdomain podman[77064]: 2025-12-06 08:34:42.623004631 +0000 UTC m=+0.134069883 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:34:42 np0005548790.localdomain podman[77064]: 2025-12-06 08:34:42.671662181 +0000 UTC m=+0.182727423 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64)
Dec 06 08:34:42 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:34:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:34:47 np0005548790.localdomain systemd[1]: tmp-crun.HUPPTL.mount: Deactivated successfully.
Dec 06 08:34:47 np0005548790.localdomain podman[77110]: 2025-12-06 08:34:47.58701861 +0000 UTC m=+0.098924825 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step5)
Dec 06 08:34:47 np0005548790.localdomain podman[77110]: 2025-12-06 08:34:47.672900744 +0000 UTC m=+0.184806909 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible)
Dec 06 08:34:47 np0005548790.localdomain podman[77110]: unhealthy
Dec 06 08:34:47 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:34:47 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 08:34:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:34:54 np0005548790.localdomain podman[77132]: 2025-12-06 08:34:54.564339114 +0000 UTC m=+0.083439231 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=)
Dec 06 08:34:54 np0005548790.localdomain podman[77132]: 2025-12-06 08:34:54.74982322 +0000 UTC m=+0.268923347 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:34:54 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:35:05 np0005548790.localdomain sshd[77161]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:35:05 np0005548790.localdomain sshd[77163]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:35:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:35:06 np0005548790.localdomain podman[77165]: 2025-12-06 08:35:06.562426652 +0000 UTC m=+0.075852848 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:35:06 np0005548790.localdomain podman[77165]: 2025-12-06 08:35:06.569156992 +0000 UTC m=+0.082583148 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Dec 06 08:35:06 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:35:06 np0005548790.localdomain sshd[77161]: Received disconnect from 43.163.123.45 port 57438:11: Bye Bye [preauth]
Dec 06 08:35:06 np0005548790.localdomain sshd[77161]: Disconnected from authenticating user root 43.163.123.45 port 57438 [preauth]
Dec 06 08:35:07 np0005548790.localdomain sshd[77163]: Received disconnect from 103.226.138.52 port 42166:11: Bye Bye [preauth]
Dec 06 08:35:07 np0005548790.localdomain sshd[77163]: Disconnected from authenticating user root 103.226.138.52 port 42166 [preauth]
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:35:09 np0005548790.localdomain podman[77186]: 2025-12-06 08:35:09.576068687 +0000 UTC m=+0.082330342 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:12:45Z, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 08:35:09 np0005548790.localdomain podman[77185]: 2025-12-06 08:35:09.623871984 +0000 UTC m=+0.131002432 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: tmp-crun.gdKzJz.mount: Deactivated successfully.
Dec 06 08:35:09 np0005548790.localdomain podman[77186]: 2025-12-06 08:35:09.677007464 +0000 UTC m=+0.183269139 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 06 08:35:09 np0005548790.localdomain podman[77185]: 2025-12-06 08:35:09.67685922 +0000 UTC m=+0.183989638 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:35:09 np0005548790.localdomain podman[77187]: 2025-12-06 08:35:09.679154501 +0000 UTC m=+0.179889498 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible)
Dec 06 08:35:09 np0005548790.localdomain podman[77184]: 2025-12-06 08:35:09.730614016 +0000 UTC m=+0.241110833 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:35:09 np0005548790.localdomain podman[77184]: 2025-12-06 08:35:09.812150434 +0000 UTC m=+0.322647221 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, vcs-type=git)
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:35:09 np0005548790.localdomain podman[77187]: 2025-12-06 08:35:09.859703645 +0000 UTC m=+0.360438682 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public)
Dec 06 08:35:09 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:35:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:35:10 np0005548790.localdomain podman[77274]: 2025-12-06 08:35:10.560375577 +0000 UTC m=+0.077110070 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 06 08:35:10 np0005548790.localdomain podman[77274]: 2025-12-06 08:35:10.908228711 +0000 UTC m=+0.424963264 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 06 08:35:10 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:35:13 np0005548790.localdomain sshd[77295]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:35:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:35:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:35:13 np0005548790.localdomain systemd[1]: tmp-crun.DlyEil.mount: Deactivated successfully.
Dec 06 08:35:13 np0005548790.localdomain podman[77297]: 2025-12-06 08:35:13.577660559 +0000 UTC m=+0.090590662 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:35:13 np0005548790.localdomain systemd[1]: tmp-crun.qq7quf.mount: Deactivated successfully.
Dec 06 08:35:13 np0005548790.localdomain podman[77298]: 2025-12-06 08:35:13.631548199 +0000 UTC m=+0.142047286 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, distribution-scope=public, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 08:35:13 np0005548790.localdomain podman[77297]: 2025-12-06 08:35:13.651018889 +0000 UTC m=+0.163949032 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, 
release=1761123044, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:35:13 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:35:13 np0005548790.localdomain podman[77298]: 2025-12-06 08:35:13.684258997 +0000 UTC m=+0.194758044 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, tcib_managed=true, io.openshift.expose-services=)
Dec 06 08:35:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:35:14 np0005548790.localdomain sshd[77295]: Received disconnect from 35.247.75.98 port 42390:11: Bye Bye [preauth]
Dec 06 08:35:14 np0005548790.localdomain sshd[77295]: Disconnected from authenticating user root 35.247.75.98 port 42390 [preauth]
Dec 06 08:35:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:35:18 np0005548790.localdomain podman[77343]: 2025-12-06 08:35:18.555895007 +0000 UTC m=+0.073320120 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:35:18 np0005548790.localdomain podman[77343]: 2025-12-06 08:35:18.597123519 +0000 UTC m=+0.114548612 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12)
Dec 06 08:35:18 np0005548790.localdomain podman[77343]: unhealthy
Dec 06 08:35:18 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:35:18 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 08:35:18 np0005548790.localdomain sshd[35740]: Received disconnect from 192.168.122.100 port 43496:11: disconnected by user
Dec 06 08:35:18 np0005548790.localdomain sshd[35740]: Disconnected from user zuul 192.168.122.100 port 43496
Dec 06 08:35:18 np0005548790.localdomain sshd[35737]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:35:18 np0005548790.localdomain systemd[1]: session-27.scope: Deactivated successfully.
Dec 06 08:35:18 np0005548790.localdomain systemd[1]: session-27.scope: Consumed 2.948s CPU time.
Dec 06 08:35:18 np0005548790.localdomain systemd-logind[760]: Session 27 logged out. Waiting for processes to exit.
Dec 06 08:35:18 np0005548790.localdomain systemd-logind[760]: Removed session 27.
Dec 06 08:35:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:35:25 np0005548790.localdomain systemd[1]: tmp-crun.mnPr58.mount: Deactivated successfully.
Dec 06 08:35:25 np0005548790.localdomain podman[77365]: 2025-12-06 08:35:25.582213332 +0000 UTC m=+0.095559915 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, container_name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:35:25 np0005548790.localdomain podman[77365]: 2025-12-06 08:35:25.778422274 +0000 UTC m=+0.291768857 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:35:25 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:35:31 np0005548790.localdomain sudo[77393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:35:31 np0005548790.localdomain sudo[77393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:35:31 np0005548790.localdomain sudo[77393]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:32 np0005548790.localdomain sudo[77408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:35:32 np0005548790.localdomain sudo[77408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:35:32 np0005548790.localdomain sudo[77408]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:32 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:35:32 np0005548790.localdomain recover_tripleo_nova_virtqemud[77455]: 62556
Dec 06 08:35:32 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:35:32 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:35:33 np0005548790.localdomain sudo[77456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:35:33 np0005548790.localdomain sudo[77456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:35:33 np0005548790.localdomain sudo[77456]: pam_unix(sudo:session): session closed for user root
Dec 06 08:35:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:35:37 np0005548790.localdomain podman[77471]: 2025-12-06 08:35:37.57393114 +0000 UTC m=+0.089119803 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:35:37 np0005548790.localdomain podman[77471]: 2025-12-06 08:35:37.590122562 +0000 UTC m=+0.105311265 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, vcs-type=git)
Dec 06 08:35:37 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:35:40 np0005548790.localdomain podman[77499]: 2025-12-06 08:35:40.567126038 +0000 UTC m=+0.075078587 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, release=1761123044)
Dec 06 08:35:40 np0005548790.localdomain podman[77491]: 2025-12-06 08:35:40.625116697 +0000 UTC m=+0.140873675 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 
17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=)
Dec 06 08:35:40 np0005548790.localdomain podman[77491]: 2025-12-06 08:35:40.639409109 +0000 UTC m=+0.155166137 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64)
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:35:40 np0005548790.localdomain podman[77499]: 2025-12-06 08:35:40.655523369 +0000 UTC m=+0.163475948 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-cron-container, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack 
Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:35:40 np0005548790.localdomain podman[77493]: 2025-12-06 08:35:40.722862109 +0000 UTC m=+0.234144257 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:35:40 np0005548790.localdomain podman[77493]: 2025-12-06 08:35:40.782296077 +0000 UTC m=+0.293578175 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public)
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:35:40 np0005548790.localdomain podman[77492]: 2025-12-06 08:35:40.785180043 +0000 UTC m=+0.296416910 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 06 08:35:40 np0005548790.localdomain podman[77492]: 2025-12-06 08:35:40.870254557 +0000 UTC m=+0.381491384 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Dec 06 08:35:40 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:35:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:35:41 np0005548790.localdomain podman[77585]: 2025-12-06 08:35:41.568104774 +0000 UTC m=+0.085912677 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:35:41 np0005548790.localdomain podman[77585]: 2025-12-06 08:35:41.951257012 +0000 UTC m=+0.469064925 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute)
Dec 06 08:35:41 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:35:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:35:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:35:44 np0005548790.localdomain podman[77609]: 2025-12-06 08:35:44.562071632 +0000 UTC m=+0.075428516 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 06 08:35:44 np0005548790.localdomain podman[77610]: 2025-12-06 08:35:44.640977241 +0000 UTC m=+0.151771476 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 
17.1 ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:35:44 np0005548790.localdomain podman[77609]: 2025-12-06 08:35:44.659381622 +0000 UTC m=+0.172738496 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:35:44 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:35:44 np0005548790.localdomain podman[77610]: 2025-12-06 08:35:44.68738945 +0000 UTC m=+0.198183745 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, version=17.1.12, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:35:44 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:35:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:35:49 np0005548790.localdomain podman[77658]: 2025-12-06 08:35:49.56950864 +0000 UTC m=+0.085209656 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, container_name=nova_compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z)
Dec 06 08:35:49 np0005548790.localdomain podman[77658]: 2025-12-06 08:35:49.654284345 +0000 UTC m=+0.169985351 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, url=https://www.redhat.com)
Dec 06 08:35:49 np0005548790.localdomain podman[77658]: unhealthy
Dec 06 08:35:49 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:35:49 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 08:35:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:35:56 np0005548790.localdomain podman[77680]: 2025-12-06 08:35:56.55000818 +0000 UTC m=+0.071204963 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:49:46Z, 
release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:35:56 np0005548790.localdomain podman[77680]: 2025-12-06 08:35:56.76743901 +0000 UTC m=+0.288635773 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=)
Dec 06 08:35:56 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:36:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:36:08 np0005548790.localdomain podman[77709]: 2025-12-06 08:36:08.603219245 +0000 UTC m=+0.122573379 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container)
Dec 06 08:36:08 np0005548790.localdomain podman[77709]: 2025-12-06 08:36:08.63634056 +0000 UTC m=+0.155694724 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 06 08:36:08 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:36:11 np0005548790.localdomain podman[77728]: 2025-12-06 08:36:11.584050508 +0000 UTC m=+0.098519827 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red 
Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:36:11 np0005548790.localdomain podman[77728]: 2025-12-06 08:36:11.617020659 +0000 UTC m=+0.131489998 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, container_name=iscsid, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack 
Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z)
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: tmp-crun.EpnQQd.mount: Deactivated successfully.
Dec 06 08:36:11 np0005548790.localdomain podman[77730]: 2025-12-06 08:36:11.699593817 +0000 UTC m=+0.205705432 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=)
Dec 06 08:36:11 np0005548790.localdomain podman[77733]: 2025-12-06 08:36:11.74569613 +0000 UTC m=+0.248074985 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=logrotate_crond, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public)
Dec 06 08:36:11 np0005548790.localdomain podman[77730]: 2025-12-06 08:36:11.753595901 +0000 UTC m=+0.259707556 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:36:11 np0005548790.localdomain podman[77729]: 2025-12-06 08:36:11.795339058 +0000 UTC m=+0.304462874 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:36:11 np0005548790.localdomain podman[77733]: 2025-12-06 08:36:11.805966452 +0000 UTC m=+0.308345287 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, batch=17.1_20251118.1, container_name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack 
Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, tcib_managed=true)
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:36:11 np0005548790.localdomain podman[77729]: 2025-12-06 08:36:11.852377402 +0000 UTC m=+0.361501228 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true)
Dec 06 08:36:11 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:36:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:36:12 np0005548790.localdomain podman[77822]: 2025-12-06 08:36:12.571064212 +0000 UTC m=+0.083606197 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 08:36:12 np0005548790.localdomain podman[77822]: 2025-12-06 08:36:12.935752384 +0000 UTC m=+0.448294409 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 06 08:36:12 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:36:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:36:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:36:15 np0005548790.localdomain systemd[1]: tmp-crun.agPDXW.mount: Deactivated successfully.
Dec 06 08:36:15 np0005548790.localdomain podman[77846]: 2025-12-06 08:36:15.570283975 +0000 UTC m=+0.088380895 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:14:25Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4)
Dec 06 08:36:15 np0005548790.localdomain podman[77847]: 2025-12-06 08:36:15.588842421 +0000 UTC m=+0.099969204 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4)
Dec 06 08:36:15 np0005548790.localdomain podman[77847]: 2025-12-06 08:36:15.604680715 +0000 UTC m=+0.115807478 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, distribution-scope=public, architecture=x86_64, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 
17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller)
Dec 06 08:36:15 np0005548790.localdomain podman[77846]: 2025-12-06 08:36:15.613614914 +0000 UTC m=+0.131711814 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64)
Dec 06 08:36:15 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:36:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:36:18 np0005548790.localdomain sshd[77896]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:36:20 np0005548790.localdomain sshd[77896]: Received disconnect from 43.163.123.45 port 56100:11: Bye Bye [preauth]
Dec 06 08:36:20 np0005548790.localdomain sshd[77896]: Disconnected from authenticating user root 43.163.123.45 port 56100 [preauth]
Dec 06 08:36:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:36:20 np0005548790.localdomain systemd[1]: tmp-crun.atZNH1.mount: Deactivated successfully.
Dec 06 08:36:20 np0005548790.localdomain podman[77898]: 2025-12-06 08:36:20.541756381 +0000 UTC m=+0.094376854 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, container_name=nova_compute, architecture=x86_64, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:36:20 np0005548790.localdomain podman[77898]: 2025-12-06 08:36:20.626274801 +0000 UTC m=+0.178895334 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com)
Dec 06 08:36:20 np0005548790.localdomain podman[77898]: unhealthy
Dec 06 08:36:20 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:36:20 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 08:36:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:36:27 np0005548790.localdomain podman[77921]: 2025-12-06 08:36:27.56458911 +0000 UTC m=+0.079746143 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64)
Dec 06 08:36:27 np0005548790.localdomain podman[77921]: 2025-12-06 08:36:27.751387386 +0000 UTC m=+0.266544369 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:36:27 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:36:33 np0005548790.localdomain sudo[77951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:36:33 np0005548790.localdomain sudo[77951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:36:33 np0005548790.localdomain sudo[77951]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:33 np0005548790.localdomain sudo[77966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:36:33 np0005548790.localdomain sudo[77966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:36:34 np0005548790.localdomain sudo[77966]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:35 np0005548790.localdomain sudo[78013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:36:35 np0005548790.localdomain sudo[78013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:36:35 np0005548790.localdomain sudo[78013]: pam_unix(sudo:session): session closed for user root
Dec 06 08:36:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:36:39 np0005548790.localdomain podman[78028]: 2025-12-06 08:36:39.566799219 +0000 UTC m=+0.082798246 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, 
name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:36:39 np0005548790.localdomain podman[78028]: 2025-12-06 08:36:39.580131294 +0000 UTC m=+0.096130332 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, release=1761123044, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:36:39 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: tmp-crun.YHJhrS.mount: Deactivated successfully.
Dec 06 08:36:42 np0005548790.localdomain podman[78048]: 2025-12-06 08:36:42.577277843 +0000 UTC m=+0.091420165 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 06 08:36:42 np0005548790.localdomain podman[78050]: 2025-12-06 08:36:42.623578282 +0000 UTC m=+0.129571417 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, container_name=logrotate_crond)
Dec 06 08:36:42 np0005548790.localdomain podman[78048]: 2025-12-06 08:36:42.637124784 +0000 UTC m=+0.151267086 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:36:42 np0005548790.localdomain podman[78050]: 2025-12-06 08:36:42.661241398 +0000 UTC m=+0.167234533 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, distribution-scope=public, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:36:42 np0005548790.localdomain podman[78047]: 2025-12-06 08:36:42.723733409 +0000 UTC m=+0.239314480 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, version=17.1.12, tcib_managed=true, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:36:42 np0005548790.localdomain podman[78047]: 2025-12-06 08:36:42.769150244 +0000 UTC m=+0.284731315 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 06 08:36:42 np0005548790.localdomain podman[78049]: 2025-12-06 08:36:42.779715336 +0000 UTC m=+0.288207057 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:36:42 np0005548790.localdomain podman[78049]: 2025-12-06 08:36:42.810319075 +0000 UTC m=+0.318810766 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 06 08:36:42 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:36:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:36:43 np0005548790.localdomain systemd[1]: tmp-crun.VwImRH.mount: Deactivated successfully.
Dec 06 08:36:43 np0005548790.localdomain podman[78137]: 2025-12-06 08:36:43.572142907 +0000 UTC m=+0.086634418 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, 
name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:36:43 np0005548790.localdomain podman[78137]: 2025-12-06 08:36:43.947094674 +0000 UTC m=+0.461586165 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 06 08:36:43 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:36:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:36:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:36:46 np0005548790.localdomain podman[78160]: 2025-12-06 08:36:46.568033912 +0000 UTC m=+0.080155775 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git)
Dec 06 08:36:46 np0005548790.localdomain podman[78161]: 2025-12-06 08:36:46.622166259 +0000 UTC m=+0.131423366 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4)
Dec 06 08:36:46 np0005548790.localdomain podman[78160]: 2025-12-06 08:36:46.649077809 +0000 UTC m=+0.161199622 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, release=1761123044, container_name=ovn_metadata_agent, batch=17.1_20251118.1, tcib_managed=true)
Dec 06 08:36:46 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:36:46 np0005548790.localdomain podman[78161]: 2025-12-06 08:36:46.699296782 +0000 UTC m=+0.208553929 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, version=17.1.12)
Dec 06 08:36:46 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:36:47 np0005548790.localdomain sshd[78205]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:36:49 np0005548790.localdomain sshd[78205]: Received disconnect from 103.226.138.52 port 51532:11: Bye Bye [preauth]
Dec 06 08:36:49 np0005548790.localdomain sshd[78205]: Disconnected from authenticating user root 103.226.138.52 port 51532 [preauth]
Dec 06 08:36:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:36:51 np0005548790.localdomain podman[78207]: 2025-12-06 08:36:51.571508943 +0000 UTC m=+0.084770788 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step5)
Dec 06 08:36:51 np0005548790.localdomain podman[78207]: 2025-12-06 08:36:51.637261931 +0000 UTC m=+0.150523796 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:36:51 np0005548790.localdomain podman[78207]: unhealthy
Dec 06 08:36:51 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:36:51 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 08:36:58 np0005548790.localdomain sshd[78229]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:36:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:36:58 np0005548790.localdomain systemd[1]: tmp-crun.NilGSF.mount: Deactivated successfully.
Dec 06 08:36:58 np0005548790.localdomain podman[78230]: 2025-12-06 08:36:58.577683408 +0000 UTC m=+0.094813036 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-type=git, 
batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:36:58 np0005548790.localdomain podman[78230]: 2025-12-06 08:36:58.801223476 +0000 UTC m=+0.318353054 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:36:58 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:37:00 np0005548790.localdomain sshd[78229]: Received disconnect from 35.247.75.98 port 56902:11: Bye Bye [preauth]
Dec 06 08:37:00 np0005548790.localdomain sshd[78229]: Disconnected from authenticating user root 35.247.75.98 port 56902 [preauth]
Dec 06 08:37:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:37:10 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:37:10 np0005548790.localdomain recover_tripleo_nova_virtqemud[78266]: 62556
Dec 06 08:37:10 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:37:10 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:37:10 np0005548790.localdomain podman[78261]: 2025-12-06 08:37:10.574334559 +0000 UTC m=+0.083523265 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 06 08:37:10 np0005548790.localdomain podman[78261]: 2025-12-06 08:37:10.585048016 +0000 UTC m=+0.094236702 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64)
Dec 06 08:37:10 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: tmp-crun.RTVIfI.mount: Deactivated successfully.
Dec 06 08:37:13 np0005548790.localdomain podman[78286]: 2025-12-06 08:37:13.632614553 +0000 UTC m=+0.139897992 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, version=17.1.12)
Dec 06 08:37:13 np0005548790.localdomain podman[78284]: 2025-12-06 08:37:13.599519498 +0000 UTC m=+0.109637103 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute)
Dec 06 08:37:13 np0005548790.localdomain podman[78285]: 2025-12-06 08:37:13.559001075 +0000 UTC m=+0.072001447 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 06 08:37:13 np0005548790.localdomain podman[78286]: 2025-12-06 08:37:13.668104692 +0000 UTC m=+0.175388111 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:37:13 np0005548790.localdomain podman[78284]: 2025-12-06 08:37:13.67812354 +0000 UTC m=+0.188241165 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 06 08:37:13 np0005548790.localdomain podman[78285]: 2025-12-06 08:37:13.694173339 +0000 UTC m=+0.207173661 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:37:13 np0005548790.localdomain podman[78283]: 2025-12-06 08:37:13.613274966 +0000 UTC m=+0.128188390 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-type=git, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:37:13 np0005548790.localdomain podman[78283]: 2025-12-06 08:37:13.745183333 +0000 UTC m=+0.260096837 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 08:37:13 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:37:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:37:14 np0005548790.localdomain podman[78376]: 2025-12-06 08:37:14.574543251 +0000 UTC m=+0.085040935 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 06 08:37:14 np0005548790.localdomain podman[78376]: 2025-12-06 08:37:14.939324966 +0000 UTC m=+0.449822680 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z)
Dec 06 08:37:14 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:37:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:37:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:37:17 np0005548790.localdomain podman[78399]: 2025-12-06 08:37:17.578857142 +0000 UTC m=+0.090437670 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 08:37:17 np0005548790.localdomain podman[78399]: 2025-12-06 08:37:17.630215085 +0000 UTC m=+0.141795653 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 06 08:37:17 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:37:17 np0005548790.localdomain podman[78400]: 2025-12-06 08:37:17.631614072 +0000 UTC m=+0.139257395 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller)
Dec 06 08:37:17 np0005548790.localdomain podman[78400]: 2025-12-06 08:37:17.715145206 +0000 UTC m=+0.222788439 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:37:17 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:37:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:37:22 np0005548790.localdomain podman[78537]: 2025-12-06 08:37:22.569959781 +0000 UTC m=+0.078634714 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:37:22 np0005548790.localdomain podman[78537]: 2025-12-06 08:37:22.596032969 +0000 UTC m=+0.104707892 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=nova_compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 06 08:37:22 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:37:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:37:29 np0005548790.localdomain podman[78563]: 2025-12-06 08:37:29.560594902 +0000 UTC m=+0.078603453 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, 
name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Dec 06 08:37:29 np0005548790.localdomain systemd[1]: libpod-bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3.scope: Deactivated successfully.
Dec 06 08:37:29 np0005548790.localdomain podman[78592]: 2025-12-06 08:37:29.677526308 +0000 UTC m=+0.064074713 container died bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_wait_for_compute_service, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:37:29 np0005548790.localdomain systemd[1]: tmp-crun.1G22sY.mount: Deactivated successfully.
Dec 06 08:37:29 np0005548790.localdomain podman[78592]: 2025-12-06 08:37:29.719376707 +0000 UTC m=+0.105925072 container cleanup bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, distribution-scope=public, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:37:29 np0005548790.localdomain systemd[1]: libpod-conmon-bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3.scope: Deactivated successfully.
Dec 06 08:37:29 np0005548790.localdomain python3[76631]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=1c14d9f34e8565ad391b489e982af70f --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 06 08:37:29 np0005548790.localdomain podman[78563]: 2025-12-06 08:37:29.754436645 +0000 UTC m=+0.272445176 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:37:29 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:37:29 np0005548790.localdomain sudo[76629]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:30 np0005548790.localdomain sudo[78644]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eloisvordenvmujclnmbmgcvawnnhmdj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:37:30 np0005548790.localdomain sudo[78644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:30 np0005548790.localdomain python3[78646]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:37:30 np0005548790.localdomain sudo[78644]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:30 np0005548790.localdomain sudo[78660]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdaehyrxtnehvhnrndhpdqyinueheilo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:37:30 np0005548790.localdomain sudo[78660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ead6993018cbf28a6911fcc6a4afc0bfdf470e6d9ea5b6906d250c0b5f41599c-merged.mount: Deactivated successfully.
Dec 06 08:37:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bee56f31c74019a64bbb1bf291270de77c8cc037177c55fab37088c17f0fe5f3-userdata-shm.mount: Deactivated successfully.
Dec 06 08:37:30 np0005548790.localdomain python3[78662]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 06 08:37:30 np0005548790.localdomain sudo[78660]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:31 np0005548790.localdomain sudo[78721]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyoldgfiekoeekxjnavebltlkgjgolup ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:37:31 np0005548790.localdomain sudo[78721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:31 np0005548790.localdomain python3[78723]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1765010250.6483-118343-243403342657777/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:37:31 np0005548790.localdomain sudo[78721]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:31 np0005548790.localdomain sudo[78737]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urmjhmlmocrogsqkqqewfkjtsrteyvsa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:37:31 np0005548790.localdomain sudo[78737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:31 np0005548790.localdomain python3[78739]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 08:37:31 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:37:31 np0005548790.localdomain systemd-sysv-generator[78764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:37:31 np0005548790.localdomain systemd-rc-local-generator[78757]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:37:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:37:31 np0005548790.localdomain sudo[78737]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:32 np0005548790.localdomain sshd[78775]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:37:32 np0005548790.localdomain sudo[78790]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tetycnnjcbgxdrqixmhtyjddsrmmgjjv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 06 08:37:32 np0005548790.localdomain sudo[78790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:32 np0005548790.localdomain python3[78792]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 08:37:32 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:37:32 np0005548790.localdomain systemd-rc-local-generator[78819]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:37:32 np0005548790.localdomain systemd-sysv-generator[78822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:37:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:37:32 np0005548790.localdomain systemd[1]: Starting nova_compute container...
Dec 06 08:37:33 np0005548790.localdomain tripleo-start-podman-container[78832]: Creating additional drop-in dependency for "nova_compute" (1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254)
Dec 06 08:37:33 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 08:37:33 np0005548790.localdomain systemd-rc-local-generator[78886]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 08:37:33 np0005548790.localdomain systemd-sysv-generator[78891]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 08:37:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 08:37:33 np0005548790.localdomain systemd[1]: Started nova_compute container.
Dec 06 08:37:33 np0005548790.localdomain sudo[78790]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:33 np0005548790.localdomain sshd[78775]: Received disconnect from 43.163.123.45 port 54772:11: Bye Bye [preauth]
Dec 06 08:37:33 np0005548790.localdomain sshd[78775]: Disconnected from authenticating user root 43.163.123.45 port 54772 [preauth]
Dec 06 08:37:33 np0005548790.localdomain sudo[78927]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuhyxqrxzbxvrehinoyzlwzaagsnrtsc ; /usr/bin/python3
Dec 06 08:37:33 np0005548790.localdomain sudo[78927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:33 np0005548790.localdomain python3[78929]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:37:33 np0005548790.localdomain sudo[78927]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:34 np0005548790.localdomain sudo[78975]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upxeaqyqpsefdtyrqfcpqjadphusngpy ; /usr/bin/python3
Dec 06 08:37:34 np0005548790.localdomain sudo[78975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:34 np0005548790.localdomain sudo[78975]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:34 np0005548790.localdomain sudo[79018]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltsxelnuysfngadkxkfhcwejniytcsgw ; /usr/bin/python3
Dec 06 08:37:34 np0005548790.localdomain sudo[79018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:35 np0005548790.localdomain sudo[79018]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:35 np0005548790.localdomain sudo[79056]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbypoktfsmllttfedisasepozlzuyzqq ; /usr/bin/python3
Dec 06 08:37:35 np0005548790.localdomain sudo[79056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:35 np0005548790.localdomain sudo[79042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:37:35 np0005548790.localdomain sudo[79042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:37:35 np0005548790.localdomain sudo[79042]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:35 np0005548790.localdomain sudo[79066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:37:35 np0005548790.localdomain sudo[79066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:37:35 np0005548790.localdomain python3[79064]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005548790 step=5 update_config_hash_only=False
Dec 06 08:37:35 np0005548790.localdomain sudo[79056]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:35 np0005548790.localdomain sudo[79109]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpxwyzxfsufydnucgqiaihfwuqosmedj ; /usr/bin/python3
Dec 06 08:37:35 np0005548790.localdomain sudo[79109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:36 np0005548790.localdomain sudo[79066]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:36 np0005548790.localdomain python3[79113]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 08:37:36 np0005548790.localdomain sudo[79109]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:36 np0005548790.localdomain sudo[79141]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwxqgdomormreojknftexosbrolnuurr ; /usr/bin/python3
Dec 06 08:37:36 np0005548790.localdomain sudo[79141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 06 08:37:36 np0005548790.localdomain python3[79143]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 06 08:37:36 np0005548790.localdomain sudo[79141]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:36 np0005548790.localdomain sudo[79144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:37:36 np0005548790.localdomain sudo[79144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:37:36 np0005548790.localdomain sudo[79144]: pam_unix(sudo:session): session closed for user root
Dec 06 08:37:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:37:41 np0005548790.localdomain systemd[1]: tmp-crun.E0pmBp.mount: Deactivated successfully.
Dec 06 08:37:41 np0005548790.localdomain podman[79159]: 2025-12-06 08:37:41.587399335 +0000 UTC m=+0.099387379 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:37:41 np0005548790.localdomain podman[79159]: 2025-12-06 08:37:41.622699589 +0000 UTC m=+0.134687643 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:37:41 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: tmp-crun.2QmWon.mount: Deactivated successfully.
Dec 06 08:37:44 np0005548790.localdomain podman[79180]: 2025-12-06 08:37:44.558496827 +0000 UTC m=+0.064773983 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4)
Dec 06 08:37:44 np0005548790.localdomain podman[79179]: 2025-12-06 08:37:44.572376628 +0000 UTC m=+0.078669935 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:37:44 np0005548790.localdomain podman[79180]: 2025-12-06 08:37:44.585363235 +0000 UTC m=+0.091640401 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:37:44 np0005548790.localdomain podman[79179]: 2025-12-06 08:37:44.62518249 +0000 UTC m=+0.131475757 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step4)
Dec 06 08:37:44 np0005548790.localdomain podman[79186]: 2025-12-06 08:37:44.635948198 +0000 UTC m=+0.135633658 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, vendor=Red Hat, Inc.)
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:37:44 np0005548790.localdomain podman[79186]: 2025-12-06 08:37:44.643146651 +0000 UTC m=+0.142832121 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:37:44 np0005548790.localdomain podman[79178]: 2025-12-06 08:37:44.727210099 +0000 UTC m=+0.236596288 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, distribution-scope=public, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, url=https://www.redhat.com, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:37:44 np0005548790.localdomain podman[79178]: 2025-12-06 08:37:44.735622634 +0000 UTC m=+0.245008833 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 08:37:44 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:37:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:37:45 np0005548790.localdomain podman[79260]: 2025-12-06 08:37:45.546052356 +0000 UTC m=+0.064877286 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=)
Dec 06 08:37:45 np0005548790.localdomain podman[79260]: 2025-12-06 08:37:45.912706281 +0000 UTC m=+0.431531181 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, version=17.1.12)
Dec 06 08:37:45 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:37:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:37:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:37:48 np0005548790.localdomain systemd[1]: tmp-crun.m1BE4G.mount: Deactivated successfully.
Dec 06 08:37:48 np0005548790.localdomain podman[79284]: 2025-12-06 08:37:48.574988403 +0000 UTC m=+0.086037562 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:37:48 np0005548790.localdomain podman[79284]: 2025-12-06 08:37:48.602214762 +0000 UTC m=+0.113263911 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 06 08:37:48 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:37:48 np0005548790.localdomain podman[79283]: 2025-12-06 08:37:48.620228053 +0000 UTC m=+0.134292252 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 06 08:37:48 np0005548790.localdomain podman[79283]: 2025-12-06 08:37:48.691387056 +0000 UTC m=+0.205451215 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible)
Dec 06 08:37:48 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:37:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:37:53 np0005548790.localdomain podman[79331]: 2025-12-06 08:37:53.567535411 +0000 UTC m=+0.079413314 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:37:53 np0005548790.localdomain podman[79331]: 2025-12-06 08:37:53.597169854 +0000 UTC m=+0.109047767 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:37:53 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:38:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:38:00 np0005548790.localdomain podman[79357]: 2025-12-06 08:38:00.573280595 +0000 UTC m=+0.088820977 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:38:00 np0005548790.localdomain podman[79357]: 2025-12-06 08:38:00.774205328 +0000 UTC m=+0.289745750 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:38:00 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:38:05 np0005548790.localdomain sshd[79387]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:38:05 np0005548790.localdomain sshd[79387]: Accepted publickey for zuul from 192.168.122.100 port 35544 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 08:38:05 np0005548790.localdomain systemd-logind[760]: New session 33 of user zuul.
Dec 06 08:38:05 np0005548790.localdomain systemd[1]: Started Session 33 of User zuul.
Dec 06 08:38:05 np0005548790.localdomain sshd[79387]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 08:38:06 np0005548790.localdomain sudo[79494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dyuqvtxoxfxhghgahayppuapmupawvou ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765010285.9844577-40341-175822515974978/AnsiballZ_setup.py
Dec 06 08:38:06 np0005548790.localdomain sudo[79494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:38:06 np0005548790.localdomain python3[79496]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 08:38:09 np0005548790.localdomain sudo[79494]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:38:12 np0005548790.localdomain podman[79683]: 2025-12-06 08:38:12.574978456 +0000 UTC m=+0.086014140 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 06 08:38:12 np0005548790.localdomain podman[79683]: 2025-12-06 08:38:12.612241013 +0000 UTC m=+0.123276707 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, maintainer=OpenStack TripleO Team)
Dec 06 08:38:12 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:38:13 np0005548790.localdomain sudo[79775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjmpmkevrjgbntmipwytioiocbngyuzr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765010293.6740866-40403-154087890409209/AnsiballZ_dnf.py
Dec 06 08:38:13 np0005548790.localdomain sudo[79775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:38:14 np0005548790.localdomain python3[79777]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:38:15 np0005548790.localdomain podman[79780]: 2025-12-06 08:38:15.573327467 +0000 UTC m=+0.087885552 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Dec 06 08:38:15 np0005548790.localdomain podman[79780]: 2025-12-06 08:38:15.611111127 +0000 UTC m=+0.125669162 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.expose-services=, container_name=iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:38:15 np0005548790.localdomain podman[79782]: 2025-12-06 08:38:15.624881345 +0000 UTC m=+0.134877868 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z)
Dec 06 08:38:15 np0005548790.localdomain podman[79782]: 2025-12-06 08:38:15.680198434 +0000 UTC m=+0.190194947 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4)
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:38:15 np0005548790.localdomain podman[79781]: 2025-12-06 08:38:15.682163476 +0000 UTC m=+0.194243835 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute)
Dec 06 08:38:15 np0005548790.localdomain podman[79783]: 2025-12-06 08:38:15.743136777 +0000 UTC m=+0.249118713 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:49:32Z, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:38:15 np0005548790.localdomain podman[79781]: 2025-12-06 08:38:15.771868556 +0000 UTC m=+0.283948865 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:38:15 np0005548790.localdomain podman[79783]: 2025-12-06 08:38:15.825528881 +0000 UTC m=+0.331510767 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:38:15 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:38:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:38:16 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:38:16 np0005548790.localdomain recover_tripleo_nova_virtqemud[79870]: 62556
Dec 06 08:38:16 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:38:16 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:38:16 np0005548790.localdomain systemd[1]: tmp-crun.9hlAGb.mount: Deactivated successfully.
Dec 06 08:38:16 np0005548790.localdomain podman[79868]: 2025-12-06 08:38:16.588412731 +0000 UTC m=+0.101014682 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container)
Dec 06 08:38:16 np0005548790.localdomain podman[79868]: 2025-12-06 08:38:16.938866403 +0000 UTC m=+0.451468334 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:36:58Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044)
Dec 06 08:38:16 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:38:17 np0005548790.localdomain sudo[79775]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:18 np0005548790.localdomain sudo[79981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fejlpubskgdkufqtvfxvojbxxjkthbdd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765010298.05332-40458-33228801673781/AnsiballZ_iptables.py
Dec 06 08:38:18 np0005548790.localdomain sudo[79981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 08:38:18 np0005548790.localdomain python3[79983]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None
Dec 06 08:38:18 np0005548790.localdomain kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 06 08:38:18 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation.
Dec 06 08:38:18 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 08:38:18 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:38:18 np0005548790.localdomain sudo[79981]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:18 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 08:38:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:38:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:38:19 np0005548790.localdomain podman[80030]: 2025-12-06 08:38:19.575804688 +0000 UTC m=+0.086482043 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4)
Dec 06 08:38:19 np0005548790.localdomain podman[80030]: 2025-12-06 08:38:19.653217749 +0000 UTC m=+0.163895054 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, build-date=2025-11-18T23:34:05Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:38:19 np0005548790.localdomain systemd[1]: tmp-crun.YvB7IJ.mount: Deactivated successfully.
Dec 06 08:38:19 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:38:19 np0005548790.localdomain podman[80029]: 2025-12-06 08:38:19.66485767 +0000 UTC m=+0.175406741 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:38:19 np0005548790.localdomain podman[80029]: 2025-12-06 08:38:19.738257492 +0000 UTC m=+0.248806563 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, batch=17.1_20251118.1, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:38:19 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:38:21 np0005548790.localdomain sshd[80099]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:38:23 np0005548790.localdomain sshd[80099]: Received disconnect from 103.226.138.52 port 33610:11: Bye Bye [preauth]
Dec 06 08:38:23 np0005548790.localdomain sshd[80099]: Disconnected from authenticating user root 103.226.138.52 port 33610 [preauth]
Dec 06 08:38:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:38:24 np0005548790.localdomain podman[80101]: 2025-12-06 08:38:24.568267756 +0000 UTC m=+0.082528988 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:38:24 np0005548790.localdomain podman[80101]: 2025-12-06 08:38:24.598140005 +0000 UTC m=+0.112401197 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:38:24 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:38:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:38:31 np0005548790.localdomain podman[80128]: 2025-12-06 08:38:31.569029127 +0000 UTC m=+0.084258196 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, 
com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public)
Dec 06 08:38:31 np0005548790.localdomain podman[80128]: 2025-12-06 08:38:31.745060463 +0000 UTC m=+0.260289482 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, container_name=metrics_qdr, tcib_managed=true)
Dec 06 08:38:31 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:38:36 np0005548790.localdomain sudo[80156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:38:36 np0005548790.localdomain sudo[80156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:38:36 np0005548790.localdomain sudo[80156]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:36 np0005548790.localdomain sudo[80171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:38:36 np0005548790.localdomain sudo[80171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:38:37 np0005548790.localdomain sudo[80171]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:38 np0005548790.localdomain sudo[80217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:38:38 np0005548790.localdomain sudo[80217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:38:38 np0005548790.localdomain sudo[80217]: pam_unix(sudo:session): session closed for user root
Dec 06 08:38:42 np0005548790.localdomain sshd[80232]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:38:43 np0005548790.localdomain sshd[80232]: Received disconnect from 35.247.75.98 port 51764:11: Bye Bye [preauth]
Dec 06 08:38:43 np0005548790.localdomain sshd[80232]: Disconnected from authenticating user root 35.247.75.98 port 51764 [preauth]
Dec 06 08:38:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:38:43 np0005548790.localdomain systemd[1]: tmp-crun.bLo4UO.mount: Deactivated successfully.
Dec 06 08:38:43 np0005548790.localdomain podman[80234]: 2025-12-06 08:38:43.544016556 +0000 UTC m=+0.093031088 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3)
Dec 06 08:38:43 np0005548790.localdomain podman[80234]: 2025-12-06 08:38:43.556139571 +0000 UTC m=+0.105154183 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:38:43 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:38:45 np0005548790.localdomain sshd[80254]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:38:46 np0005548790.localdomain podman[80256]: 2025-12-06 08:38:46.576222342 +0000 UTC m=+0.094319694 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 06 08:38:46 np0005548790.localdomain podman[80256]: 2025-12-06 08:38:46.612128771 +0000 UTC m=+0.130226133 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid)
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:38:46 np0005548790.localdomain podman[80258]: 2025-12-06 08:38:46.626677221 +0000 UTC m=+0.134570519 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: tmp-crun.gnOyFf.mount: Deactivated successfully.
Dec 06 08:38:46 np0005548790.localdomain podman[80257]: 2025-12-06 08:38:46.689445299 +0000 UTC m=+0.201455218 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:38:46 np0005548790.localdomain podman[80264]: 2025-12-06 08:38:46.734837913 +0000 UTC m=+0.239432883 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:38:46 np0005548790.localdomain podman[80264]: 2025-12-06 08:38:46.746048403 +0000 UTC m=+0.250643363 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, 
distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z)
Dec 06 08:38:46 np0005548790.localdomain podman[80258]: 2025-12-06 08:38:46.756651386 +0000 UTC m=+0.264544675 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:38:46 np0005548790.localdomain podman[80257]: 2025-12-06 08:38:46.766616913 +0000 UTC m=+0.278626822 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:38:46 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:38:47 np0005548790.localdomain sshd[80254]: Received disconnect from 43.163.123.45 port 53428:11: Bye Bye [preauth]
Dec 06 08:38:47 np0005548790.localdomain sshd[80254]: Disconnected from authenticating user root 43.163.123.45 port 53428 [preauth]
Dec 06 08:38:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:38:47 np0005548790.localdomain podman[80345]: 2025-12-06 08:38:47.240502155 +0000 UTC m=+0.078304955 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_migration_target)
Dec 06 08:38:47 np0005548790.localdomain podman[80345]: 2025-12-06 08:38:47.638287543 +0000 UTC m=+0.476090363 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 08:38:47 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:38:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:38:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:38:50 np0005548790.localdomain podman[80368]: 2025-12-06 08:38:50.581255601 +0000 UTC m=+0.089295569 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:38:50 np0005548790.localdomain systemd[1]: tmp-crun.lYKphN.mount: Deactivated successfully.
Dec 06 08:38:50 np0005548790.localdomain podman[80369]: 2025-12-06 08:38:50.637228538 +0000 UTC m=+0.142574474 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 08:38:50 np0005548790.localdomain podman[80369]: 2025-12-06 08:38:50.660389958 +0000 UTC m=+0.165735854 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:38:50 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:38:50 np0005548790.localdomain podman[80368]: 2025-12-06 08:38:50.71656734 +0000 UTC m=+0.224607288 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 06 08:38:50 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:38:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:38:55 np0005548790.localdomain podman[80417]: 2025-12-06 08:38:55.573020198 +0000 UTC m=+0.086925736 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, vcs-type=git, architecture=x86_64, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, distribution-scope=public)
Dec 06 08:38:55 np0005548790.localdomain podman[80417]: 2025-12-06 08:38:55.602376423 +0000 UTC m=+0.116281961 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, distribution-scope=public)
Dec 06 08:38:55 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:39:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:39:02 np0005548790.localdomain podman[80443]: 2025-12-06 08:39:02.601817579 +0000 UTC m=+0.088437335 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12)
Dec 06 08:39:02 np0005548790.localdomain podman[80443]: 2025-12-06 08:39:02.768183098 +0000 UTC m=+0.254802844 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:39:02 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:39:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:39:14 np0005548790.localdomain podman[80472]: 2025-12-06 08:39:14.565708601 +0000 UTC m=+0.082147628 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:39:14 np0005548790.localdomain podman[80472]: 2025-12-06 08:39:14.580098115 +0000 UTC m=+0.096537142 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12)
Dec 06 08:39:14 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: tmp-crun.cwl8jq.mount: Deactivated successfully.
Dec 06 08:39:17 np0005548790.localdomain podman[80496]: 2025-12-06 08:39:17.571001167 +0000 UTC m=+0.077804612 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: tmp-crun.Ek9tGQ.mount: Deactivated successfully.
Dec 06 08:39:17 np0005548790.localdomain podman[80495]: 2025-12-06 08:39:17.625016911 +0000 UTC m=+0.129103623 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, 
build-date=2025-11-19T00:12:45Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:39:17 np0005548790.localdomain podman[80494]: 2025-12-06 08:39:17.630335374 +0000 UTC m=+0.141795353 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:39:17 np0005548790.localdomain podman[80496]: 2025-12-06 08:39:17.642337854 +0000 UTC m=+0.149141209 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, batch=17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-cron)
Dec 06 08:39:17 np0005548790.localdomain podman[80494]: 2025-12-06 08:39:17.660196893 +0000 UTC m=+0.171656912 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, tcib_managed=true)
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:39:17 np0005548790.localdomain podman[80495]: 2025-12-06 08:39:17.682163219 +0000 UTC m=+0.186249911 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:39:17 np0005548790.localdomain podman[80571]: 2025-12-06 08:39:17.750163308 +0000 UTC m=+0.074920124 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true)
Dec 06 08:39:17 np0005548790.localdomain podman[80493]: 2025-12-06 08:39:17.733309628 +0000 UTC m=+0.244883570 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:39:17 np0005548790.localdomain podman[80493]: 2025-12-06 08:39:17.8126723 +0000 UTC m=+0.324246282 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, 
com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 06 08:39:17 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:39:18 np0005548790.localdomain podman[80571]: 2025-12-06 08:39:18.15073567 +0000 UTC m=+0.475492476 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12)
Dec 06 08:39:18 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:39:18 np0005548790.localdomain sshd[79387]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:39:18 np0005548790.localdomain systemd[1]: session-33.scope: Deactivated successfully.
Dec 06 08:39:18 np0005548790.localdomain systemd[1]: session-33.scope: Consumed 5.774s CPU time.
Dec 06 08:39:18 np0005548790.localdomain systemd-logind[760]: Session 33 logged out. Waiting for processes to exit.
Dec 06 08:39:18 np0005548790.localdomain systemd-logind[760]: Removed session 33.
Dec 06 08:39:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:39:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:39:21 np0005548790.localdomain podman[80650]: 2025-12-06 08:39:21.563432951 +0000 UTC m=+0.081795848 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 06 08:39:21 np0005548790.localdomain podman[80650]: 2025-12-06 08:39:21.603694177 +0000 UTC m=+0.122057034 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red 
Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:39:21 np0005548790.localdomain systemd[1]: tmp-crun.ZFuA0g.mount: Deactivated successfully.
Dec 06 08:39:21 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:39:21 np0005548790.localdomain podman[80651]: 2025-12-06 08:39:21.62285077 +0000 UTC m=+0.140451827 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, container_name=ovn_controller, url=https://www.redhat.com, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 08:39:21 np0005548790.localdomain podman[80651]: 2025-12-06 08:39:21.674241684 +0000 UTC m=+0.191842751 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, 
tcib_managed=true, vcs-type=git, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4)
Dec 06 08:39:21 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:39:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:39:26 np0005548790.localdomain systemd[1]: tmp-crun.xRu5Pi.mount: Deactivated successfully.
Dec 06 08:39:26 np0005548790.localdomain podman[80698]: 2025-12-06 08:39:26.538003889 +0000 UTC m=+0.061601908 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, release=1761123044, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:39:26 np0005548790.localdomain podman[80698]: 2025-12-06 08:39:26.559069062 +0000 UTC m=+0.082667081 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, container_name=nova_compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute)
Dec 06 08:39:26 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:39:32 np0005548790.localdomain sshd[80724]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:39:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:39:32 np0005548790.localdomain sshd[80724]: Accepted publickey for zuul from 38.102.83.114 port 36278 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 08:39:33 np0005548790.localdomain systemd-logind[760]: New session 34 of user zuul.
Dec 06 08:39:33 np0005548790.localdomain systemd[1]: Started Session 34 of User zuul.
Dec 06 08:39:33 np0005548790.localdomain sshd[80724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 08:39:33 np0005548790.localdomain podman[80726]: 2025-12-06 08:39:33.061107866 +0000 UTC m=+0.088008854 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:39:33 np0005548790.localdomain sudo[80770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xerycenhjtqwawxcjdnfzwbodycgcbqd ; /usr/bin/python3
Dec 06 08:39:33 np0005548790.localdomain sudo[80770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:39:33 np0005548790.localdomain podman[80726]: 2025-12-06 08:39:33.279322551 +0000 UTC m=+0.306223529 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 06 08:39:33 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:39:33 np0005548790.localdomain python3[80772]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:39:36 np0005548790.localdomain sudo[80770]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:36 np0005548790.localdomain sshd[80774]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:39:36 np0005548790.localdomain sshd[80776]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:39:36 np0005548790.localdomain sshd[80776]: error: kex_exchange_identification: client sent invalid protocol identifier "MGLNDD_38.102.83.234_22"
Dec 06 08:39:36 np0005548790.localdomain sshd[80776]: banner exchange: Connection from 20.65.194.96 port 40596: invalid format
Dec 06 08:39:38 np0005548790.localdomain sudo[80777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:39:38 np0005548790.localdomain sudo[80777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:39:38 np0005548790.localdomain sudo[80777]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:38 np0005548790.localdomain sudo[80792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:39:38 np0005548790.localdomain sudo[80792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:39:39 np0005548790.localdomain sudo[80792]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:42 np0005548790.localdomain sudo[80838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:39:42 np0005548790.localdomain sudo[80838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:39:42 np0005548790.localdomain sudo[80838]: pam_unix(sudo:session): session closed for user root
Dec 06 08:39:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:39:45 np0005548790.localdomain podman[80853]: 2025-12-06 08:39:45.582542767 +0000 UTC m=+0.095946547 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 08:39:45 np0005548790.localdomain podman[80853]: 2025-12-06 08:39:45.594124516 +0000 UTC m=+0.107528246 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:39:45 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:39:46 np0005548790.localdomain sshd[80774]: Connection closed by 20.65.194.96 port 40584 [preauth]
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: tmp-crun.uV2xS7.mount: Deactivated successfully.
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: tmp-crun.5wU4Xk.mount: Deactivated successfully.
Dec 06 08:39:48 np0005548790.localdomain podman[80882]: 2025-12-06 08:39:48.596641868 +0000 UTC m=+0.097005435 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target)
Dec 06 08:39:48 np0005548790.localdomain podman[80876]: 2025-12-06 08:39:48.62700109 +0000 UTC m=+0.132289849 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:39:48 np0005548790.localdomain podman[80876]: 2025-12-06 08:39:48.64608265 +0000 UTC m=+0.151371449 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:39:48 np0005548790.localdomain podman[80886]: 2025-12-06 08:39:48.675484227 +0000 UTC m=+0.173658005 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:39:48 np0005548790.localdomain podman[80874]: 2025-12-06 08:39:48.55405336 +0000 UTC m=+0.066846219 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:39:48 np0005548790.localdomain podman[80875]: 2025-12-06 08:39:48.67747275 +0000 UTC m=+0.184022082 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:39:48 np0005548790.localdomain podman[80874]: 2025-12-06 08:39:48.740023092 +0000 UTC m=+0.252815921 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack 
Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3)
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:39:48 np0005548790.localdomain podman[80886]: 2025-12-06 08:39:48.760053208 +0000 UTC m=+0.258226986 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, 
com.redhat.component=openstack-cron-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-type=git)
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:39:48 np0005548790.localdomain podman[80875]: 2025-12-06 08:39:48.810578829 +0000 UTC m=+0.317128191 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, 
name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:39:48 np0005548790.localdomain podman[80882]: 2025-12-06 08:39:48.983193135 +0000 UTC m=+0.483556692 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Dec 06 08:39:48 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:39:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:39:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:39:52 np0005548790.localdomain podman[80987]: 2025-12-06 08:39:52.577292917 +0000 UTC m=+0.088027526 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:39:52 np0005548790.localdomain podman[80987]: 2025-12-06 08:39:52.623213165 +0000 UTC m=+0.133947724 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:39:52 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:39:52 np0005548790.localdomain podman[80988]: 2025-12-06 08:39:52.665374702 +0000 UTC m=+0.172001470 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044)
Dec 06 08:39:52 np0005548790.localdomain podman[80988]: 2025-12-06 08:39:52.712152853 +0000 UTC m=+0.218779651 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 08:39:52 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:39:55 np0005548790.localdomain sshd[81035]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:39:56 np0005548790.localdomain sshd[81035]: Received disconnect from 43.163.123.45 port 52076:11: Bye Bye [preauth]
Dec 06 08:39:56 np0005548790.localdomain sshd[81035]: Disconnected from authenticating user root 43.163.123.45 port 52076 [preauth]
Dec 06 08:39:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:39:57 np0005548790.localdomain podman[81037]: 2025-12-06 08:39:57.075889506 +0000 UTC m=+0.088625311 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:39:57 np0005548790.localdomain podman[81037]: 2025-12-06 08:39:57.128277296 +0000 UTC m=+0.141013061 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:39:57 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:39:57 np0005548790.localdomain sshd[81063]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:40:00 np0005548790.localdomain sshd[81063]: Received disconnect from 103.226.138.52 port 42766:11: Bye Bye [preauth]
Dec 06 08:40:00 np0005548790.localdomain sshd[81063]: Disconnected from authenticating user root 103.226.138.52 port 42766 [preauth]
Dec 06 08:40:01 np0005548790.localdomain sudo[81078]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ifkpwkwtjjeasitolgsafxcjgclfioio ; /usr/bin/python3
Dec 06 08:40:01 np0005548790.localdomain sudo[81078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:40:02 np0005548790.localdomain python3[81080]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 06 08:40:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:40:03 np0005548790.localdomain systemd[1]: tmp-crun.NtMG3K.mount: Deactivated successfully.
Dec 06 08:40:03 np0005548790.localdomain podman[81082]: 2025-12-06 08:40:03.570892382 +0000 UTC m=+0.089420143 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, build-date=2025-11-18T22:49:46Z, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:40:03 np0005548790.localdomain podman[81082]: 2025-12-06 08:40:03.756663609 +0000 UTC m=+0.275191360 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1)
Dec 06 08:40:03 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:40:05 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:40:05 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 08:40:05 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 08:40:06 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 08:40:06 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 08:40:06 np0005548790.localdomain systemd[1]: run-r9741ccbb1a8f48abbe314d243bd16b0f.service: Deactivated successfully.
Dec 06 08:40:06 np0005548790.localdomain systemd[1]: run-re4433109b61246588a9ee8949e5c0ca3.service: Deactivated successfully.
Dec 06 08:40:06 np0005548790.localdomain sudo[81078]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:40:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.1 total, 600.0 interval
                                                          Cumulative writes: 4641 writes, 21K keys, 4641 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4641 writes, 489 syncs, 9.49 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:40:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:40:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.2 total, 600.0 interval
                                                          Cumulative writes: 4958 writes, 21K keys, 4958 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4958 writes, 576 syncs, 8.61 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:40:15 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:40:15 np0005548790.localdomain recover_tripleo_nova_virtqemud[81259]: 62556
Dec 06 08:40:15 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:40:15 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:40:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:40:16 np0005548790.localdomain podman[81260]: 2025-12-06 08:40:16.578285338 +0000 UTC m=+0.092352110 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12)
Dec 06 08:40:16 np0005548790.localdomain podman[81260]: 2025-12-06 08:40:16.596163427 +0000 UTC m=+0.110230149 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64)
Dec 06 08:40:16 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:40:19 np0005548790.localdomain podman[81281]: 2025-12-06 08:40:19.629699279 +0000 UTC m=+0.140556060 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 06 08:40:19 np0005548790.localdomain podman[81280]: 2025-12-06 08:40:19.597882908 +0000 UTC m=+0.110197558 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 06 08:40:19 np0005548790.localdomain podman[81280]: 2025-12-06 08:40:19.681205276 +0000 UTC m=+0.193519996 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 06 08:40:19 np0005548790.localdomain podman[81282]: 2025-12-06 08:40:19.691167043 +0000 UTC m=+0.196636420 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, 
version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:40:19 np0005548790.localdomain podman[81282]: 2025-12-06 08:40:19.724174605 +0000 UTC m=+0.229643972 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:40:19 np0005548790.localdomain podman[81294]: 2025-12-06 08:40:19.742741762 +0000 UTC m=+0.241392556 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, container_name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, version=17.1.12, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64)
Dec 06 08:40:19 np0005548790.localdomain podman[81294]: 2025-12-06 08:40:19.754041474 +0000 UTC m=+0.252692308 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:40:19 np0005548790.localdomain podman[81281]: 2025-12-06 08:40:19.763934189 +0000 UTC m=+0.274791010 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, container_name=ceilometer_agent_compute, release=1761123044, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:40:19 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:40:19 np0005548790.localdomain podman[81283]: 2025-12-06 08:40:19.846525767 +0000 UTC m=+0.349798755 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:40:20 np0005548790.localdomain podman[81283]: 2025-12-06 08:40:20.221094174 +0000 UTC m=+0.724367122 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, container_name=nova_migration_target, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 06 08:40:20 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:40:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:40:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:40:23 np0005548790.localdomain podman[81434]: 2025-12-06 08:40:23.571497743 +0000 UTC m=+0.082667435 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:40:23 np0005548790.localdomain podman[81433]: 2025-12-06 08:40:23.625292804 +0000 UTC m=+0.138414268 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, distribution-scope=public)
Dec 06 08:40:23 np0005548790.localdomain podman[81434]: 2025-12-06 08:40:23.627104783 +0000 UTC m=+0.138274455 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12)
Dec 06 08:40:23 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:40:23 np0005548790.localdomain podman[81433]: 2025-12-06 08:40:23.711437251 +0000 UTC m=+0.224558655 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:14:25Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12)
Dec 06 08:40:23 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:40:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:40:27 np0005548790.localdomain podman[81481]: 2025-12-06 08:40:27.532100565 +0000 UTC m=+0.052638669 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute)
Dec 06 08:40:27 np0005548790.localdomain podman[81481]: 2025-12-06 08:40:27.580137693 +0000 UTC m=+0.100675817 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vcs-type=git)
Dec 06 08:40:27 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:40:33 np0005548790.localdomain sshd[81507]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:40:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:40:34 np0005548790.localdomain systemd[1]: tmp-crun.fYbZSl.mount: Deactivated successfully.
Dec 06 08:40:34 np0005548790.localdomain podman[81509]: 2025-12-06 08:40:34.569927998 +0000 UTC m=+0.088213993 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr)
Dec 06 08:40:34 np0005548790.localdomain podman[81509]: 2025-12-06 08:40:34.769271516 +0000 UTC m=+0.287557481 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044)
Dec 06 08:40:34 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:40:34 np0005548790.localdomain sshd[81507]: Received disconnect from 35.247.75.98 port 40076:11: Bye Bye [preauth]
Dec 06 08:40:34 np0005548790.localdomain sshd[81507]: Disconnected from authenticating user root 35.247.75.98 port 40076 [preauth]
Dec 06 08:40:42 np0005548790.localdomain sudo[81537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:40:42 np0005548790.localdomain sudo[81537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:40:42 np0005548790.localdomain sudo[81537]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:42 np0005548790.localdomain sudo[81552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 08:40:42 np0005548790.localdomain sudo[81552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:40:42 np0005548790.localdomain sudo[81552]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:42 np0005548790.localdomain sudo[81588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:40:42 np0005548790.localdomain sudo[81588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:40:42 np0005548790.localdomain sudo[81588]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:42 np0005548790.localdomain sudo[81603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:40:42 np0005548790.localdomain sudo[81603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:40:43 np0005548790.localdomain sudo[81603]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:44 np0005548790.localdomain sudo[81649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:40:44 np0005548790.localdomain sudo[81649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:40:44 np0005548790.localdomain sudo[81649]: pam_unix(sudo:session): session closed for user root
Dec 06 08:40:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:40:47 np0005548790.localdomain systemd[1]: tmp-crun.ssgYIS.mount: Deactivated successfully.
Dec 06 08:40:47 np0005548790.localdomain podman[81664]: 2025-12-06 08:40:47.862073754 +0000 UTC m=+0.097203794 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red 
Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4)
Dec 06 08:40:47 np0005548790.localdomain podman[81664]: 2025-12-06 08:40:47.876693045 +0000 UTC m=+0.111823115 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044)
Dec 06 08:40:47 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: tmp-crun.c5gtCB.mount: Deactivated successfully.
Dec 06 08:40:50 np0005548790.localdomain podman[81685]: 2025-12-06 08:40:50.582633902 +0000 UTC m=+0.083580120 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Dec 06 08:40:50 np0005548790.localdomain podman[81685]: 2025-12-06 08:40:50.613067907 +0000 UTC m=+0.114014165 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:40:50 np0005548790.localdomain podman[81686]: 2025-12-06 08:40:50.627422811 +0000 UTC m=+0.123575680 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, version=17.1.12)
Dec 06 08:40:50 np0005548790.localdomain podman[81686]: 2025-12-06 08:40:50.651049983 +0000 UTC m=+0.147202862 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 
17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:40:50 np0005548790.localdomain podman[81692]: 2025-12-06 08:40:50.696690135 +0000 UTC m=+0.190528943 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 06 08:40:50 np0005548790.localdomain podman[81684]: 2025-12-06 08:40:50.730926423 +0000 UTC m=+0.235378314 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 06 08:40:50 np0005548790.localdomain podman[81684]: 2025-12-06 08:40:50.741094114 +0000 UTC m=+0.245546025 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, 
summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid)
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:40:50 np0005548790.localdomain podman[81695]: 2025-12-06 08:40:50.789677146 +0000 UTC m=+0.280660958 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64)
Dec 06 08:40:50 np0005548790.localdomain podman[81695]: 2025-12-06 08:40:50.800094594 +0000 UTC m=+0.291078406 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, vcs-type=git, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:40:50 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:40:51 np0005548790.localdomain podman[81692]: 2025-12-06 08:40:51.096145442 +0000 UTC m=+0.589984320 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat 
OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:40:51 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:40:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:40:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:40:54 np0005548790.localdomain podman[81802]: 2025-12-06 08:40:54.565290755 +0000 UTC m=+0.076343605 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
distribution-scope=public, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 06 08:40:54 np0005548790.localdomain podman[81801]: 2025-12-06 08:40:54.61999496 +0000 UTC m=+0.133654050 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team)
Dec 06 08:40:54 np0005548790.localdomain podman[81802]: 2025-12-06 08:40:54.672079675 +0000 UTC m=+0.183132525 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 08:40:54 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:40:54 np0005548790.localdomain podman[81801]: 2025-12-06 08:40:54.689186873 +0000 UTC m=+0.202845983 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:40:54 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:40:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:40:58 np0005548790.localdomain podman[81847]: 2025-12-06 08:40:58.558403198 +0000 UTC m=+0.073790906 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1761123044, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 06 08:40:58 np0005548790.localdomain podman[81847]: 2025-12-06 08:40:58.612148397 +0000 UTC m=+0.127536145 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:36:58Z, version=17.1.12, url=https://www.redhat.com)
Dec 06 08:40:58 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:41:02 np0005548790.localdomain sudo[81886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-renqzxvyyhxcsfavrdlquffqjfbzvsft ; /usr/bin/python3
Dec 06 08:41:02 np0005548790.localdomain sudo[81886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:41:02 np0005548790.localdomain python3[81888]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:41:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:41:05 np0005548790.localdomain podman[82010]: 2025-12-06 08:41:05.568636262 +0000 UTC m=+0.084367250 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:41:05 np0005548790.localdomain podman[82010]: 2025-12-06 08:41:05.784006199 +0000 UTC m=+0.299737107 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:41:05 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:41:06 np0005548790.localdomain sshd[82039]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:41:07 np0005548790.localdomain sshd[82039]: Received disconnect from 43.163.123.45 port 50734:11: Bye Bye [preauth]
Dec 06 08:41:07 np0005548790.localdomain sshd[82039]: Disconnected from authenticating user root 43.163.123.45 port 50734 [preauth]
Dec 06 08:41:16 np0005548790.localdomain rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 08:41:16 np0005548790.localdomain rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 08:41:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:41:18 np0005548790.localdomain podman[82048]: 2025-12-06 08:41:18.585166926 +0000 UTC m=+0.102970749 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3)
Dec 06 08:41:18 np0005548790.localdomain podman[82048]: 2025-12-06 08:41:18.597030003 +0000 UTC m=+0.114833826 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:41:18 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:41:21 np0005548790.localdomain podman[82156]: 2025-12-06 08:41:21.578937579 +0000 UTC m=+0.085390277 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4)
Dec 06 08:41:21 np0005548790.localdomain podman[82156]: 2025-12-06 08:41:21.626239446 +0000 UTC m=+0.132692154 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4)
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:41:21 np0005548790.localdomain podman[82154]: 2025-12-06 08:41:21.631538098 +0000 UTC m=+0.143370890 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:41:21 np0005548790.localdomain podman[82154]: 2025-12-06 08:41:21.717764417 +0000 UTC m=+0.229597159 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, tcib_managed=true)
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:41:21 np0005548790.localdomain podman[82157]: 2025-12-06 08:41:21.690595929 +0000 UTC m=+0.190560103 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1761123044, version=17.1.12, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:41:21 np0005548790.localdomain podman[82155]: 2025-12-06 08:41:21.791500741 +0000 UTC m=+0.300655841 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_compute, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:41:21 np0005548790.localdomain podman[82163]: 2025-12-06 08:41:21.754176872 +0000 UTC m=+0.257240589 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, tcib_managed=true, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:41:21 np0005548790.localdomain sudo[81886]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:21 np0005548790.localdomain podman[82155]: 2025-12-06 08:41:21.822211723 +0000 UTC m=+0.331366863 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, 
container_name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=)
Dec 06 08:41:21 np0005548790.localdomain podman[82163]: 2025-12-06 08:41:21.833939927 +0000 UTC m=+0.337003624 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, name=rhosp17/openstack-cron, architecture=x86_64, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:41:21 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:41:22 np0005548790.localdomain podman[82157]: 2025-12-06 08:41:22.116253717 +0000 UTC m=+0.616217941 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:41:22 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:41:22 np0005548790.localdomain systemd[1]: tmp-crun.TfPjPS.mount: Deactivated successfully.
Dec 06 08:41:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:41:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:41:25 np0005548790.localdomain podman[82283]: 2025-12-06 08:41:25.570908773 +0000 UTC m=+0.081166825 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12)
Dec 06 08:41:25 np0005548790.localdomain podman[82283]: 2025-12-06 08:41:25.591452933 +0000 UTC m=+0.101711025 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Dec 06 08:41:25 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:41:25 np0005548790.localdomain systemd[1]: tmp-crun.wrDUln.mount: Deactivated successfully.
Dec 06 08:41:25 np0005548790.localdomain podman[82282]: 2025-12-06 08:41:25.686226801 +0000 UTC m=+0.198929048 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_metadata_agent, config_id=tripleo_step4)
Dec 06 08:41:25 np0005548790.localdomain podman[82282]: 2025-12-06 08:41:25.767550849 +0000 UTC m=+0.280253076 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:41:25 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:41:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:41:29 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:41:29 np0005548790.localdomain recover_tripleo_nova_virtqemud[82331]: 62556
Dec 06 08:41:29 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:41:29 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:41:29 np0005548790.localdomain podman[82329]: 2025-12-06 08:41:29.579461149 +0000 UTC m=+0.088461199 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 08:41:29 np0005548790.localdomain podman[82329]: 2025-12-06 08:41:29.61122815 +0000 UTC m=+0.120228250 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=)
Dec 06 08:41:29 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:41:31 np0005548790.localdomain sshd[82358]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:41:33 np0005548790.localdomain sshd[82358]: Received disconnect from 103.226.138.52 port 52912:11: Bye Bye [preauth]
Dec 06 08:41:33 np0005548790.localdomain sshd[82358]: Disconnected from authenticating user root 103.226.138.52 port 52912 [preauth]
Dec 06 08:41:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:41:36 np0005548790.localdomain podman[82360]: 2025-12-06 08:41:36.564684653 +0000 UTC m=+0.080018474 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:41:36 np0005548790.localdomain podman[82360]: 2025-12-06 08:41:36.752150943 +0000 UTC m=+0.267484744 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:41:36 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:41:44 np0005548790.localdomain sudo[82389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:41:44 np0005548790.localdomain sudo[82389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:41:44 np0005548790.localdomain sudo[82389]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:44 np0005548790.localdomain sudo[82404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:41:44 np0005548790.localdomain sudo[82404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:41:44 np0005548790.localdomain sudo[82404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:45 np0005548790.localdomain sudo[82452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:41:45 np0005548790.localdomain sudo[82452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:41:45 np0005548790.localdomain sudo[82452]: pam_unix(sudo:session): session closed for user root
Dec 06 08:41:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:41:49 np0005548790.localdomain systemd[1]: tmp-crun.8A7mA2.mount: Deactivated successfully.
Dec 06 08:41:49 np0005548790.localdomain podman[82467]: 2025-12-06 08:41:49.582047466 +0000 UTC m=+0.097436050 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 06 08:41:49 np0005548790.localdomain podman[82467]: 2025-12-06 08:41:49.598037404 +0000 UTC m=+0.113426048 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Dec 06 08:41:49 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:41:52 np0005548790.localdomain podman[82490]: 2025-12-06 08:41:52.563987123 +0000 UTC m=+0.075636227 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:41:52 np0005548790.localdomain podman[82487]: 2025-12-06 08:41:52.624105862 +0000 UTC m=+0.137535564 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 08:41:52 np0005548790.localdomain podman[82487]: 2025-12-06 08:41:52.630423952 +0000 UTC m=+0.143853604 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:41:52 np0005548790.localdomain podman[82491]: 2025-12-06 08:41:52.669059227 +0000 UTC m=+0.178505052 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4)
Dec 06 08:41:52 np0005548790.localdomain podman[82488]: 2025-12-06 08:41:52.732157596 +0000 UTC m=+0.245412783 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1)
Dec 06 08:41:52 np0005548790.localdomain podman[82489]: 2025-12-06 08:41:52.776904504 +0000 UTC m=+0.289005900 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, release=1761123044, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:41:52 np0005548790.localdomain podman[82488]: 2025-12-06 08:41:52.786137021 +0000 UTC m=+0.299392218 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:41:52 np0005548790.localdomain podman[82491]: 2025-12-06 08:41:52.801678897 +0000 UTC m=+0.311124702 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack 
TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true)
Dec 06 08:41:52 np0005548790.localdomain podman[82489]: 2025-12-06 08:41:52.813016501 +0000 UTC m=+0.325117867 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:41:52 np0005548790.localdomain podman[82490]: 2025-12-06 08:41:52.89814923 +0000 UTC m=+0.409798354 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 06 08:41:52 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:41:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:41:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:41:56 np0005548790.localdomain podman[82596]: 2025-12-06 08:41:56.542451165 +0000 UTC m=+0.061913819 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:41:56 np0005548790.localdomain systemd[1]: tmp-crun.yOPl2c.mount: Deactivated successfully.
Dec 06 08:41:56 np0005548790.localdomain podman[82597]: 2025-12-06 08:41:56.622635562 +0000 UTC m=+0.135624483 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 06 08:41:56 np0005548790.localdomain podman[82596]: 2025-12-06 08:41:56.653105378 +0000 UTC m=+0.172568012 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, vcs-type=git, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:41:56 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:41:56 np0005548790.localdomain podman[82597]: 2025-12-06 08:41:56.671208533 +0000 UTC m=+0.184197464 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, 
build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:41:56 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:42:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:42:00 np0005548790.localdomain podman[82643]: 2025-12-06 08:42:00.560884298 +0000 UTC m=+0.076656594 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Dec 06 08:42:00 np0005548790.localdomain podman[82643]: 2025-12-06 08:42:00.587133641 +0000 UTC m=+0.102905877 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Dec 06 08:42:00 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:42:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:42:07 np0005548790.localdomain podman[82670]: 2025-12-06 08:42:07.572164298 +0000 UTC m=+0.086677682 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 06 08:42:07 np0005548790.localdomain podman[82670]: 2025-12-06 08:42:07.767291373 +0000 UTC m=+0.281804837 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, tcib_managed=true, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:42:07 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:42:12 np0005548790.localdomain sudo[82711]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrurucsofayfvvszeoxwghnjhqyawsdx ; /usr/bin/python3
Dec 06 08:42:12 np0005548790.localdomain sudo[82711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 08:42:12 np0005548790.localdomain python3[82713]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 08:42:15 np0005548790.localdomain rhsm-service[6610]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 06 08:42:19 np0005548790.localdomain sudo[82711]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:42:20 np0005548790.localdomain podman[82901]: 2025-12-06 08:42:20.567514785 +0000 UTC m=+0.083808885 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container)
Dec 06 08:42:20 np0005548790.localdomain podman[82901]: 2025-12-06 08:42:20.578120569 +0000 UTC m=+0.094414609 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:42:20 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:42:22 np0005548790.localdomain sshd[82921]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:42:23 np0005548790.localdomain sshd[82945]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: tmp-crun.Qedn1M.mount: Deactivated successfully.
Dec 06 08:42:23 np0005548790.localdomain podman[82948]: 2025-12-06 08:42:23.58786104 +0000 UTC m=+0.089672132 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 06 08:42:23 np0005548790.localdomain podman[82948]: 2025-12-06 08:42:23.597812497 +0000 UTC m=+0.099623469 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:42:23 np0005548790.localdomain podman[82951]: 2025-12-06 08:42:23.645593827 +0000 UTC m=+0.141675534 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public)
Dec 06 08:42:23 np0005548790.localdomain sshd[82921]: Received disconnect from 35.247.75.98 port 39374:11: Bye Bye [preauth]
Dec 06 08:42:23 np0005548790.localdomain sshd[82921]: Disconnected from authenticating user root 35.247.75.98 port 39374 [preauth]
Dec 06 08:42:23 np0005548790.localdomain podman[82950]: 2025-12-06 08:42:23.701468603 +0000 UTC m=+0.199623736 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 08:42:23 np0005548790.localdomain podman[82952]: 2025-12-06 08:42:23.754608526 +0000 UTC m=+0.245681490 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.buildah.version=1.41.4, container_name=logrotate_crond, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 06 08:42:23 np0005548790.localdomain podman[82950]: 2025-12-06 08:42:23.780858298 +0000 UTC m=+0.279013421 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi)
Dec 06 08:42:23 np0005548790.localdomain podman[82952]: 2025-12-06 08:42:23.789710536 +0000 UTC m=+0.280783500 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:42:23 np0005548790.localdomain podman[82949]: 2025-12-06 08:42:23.795043148 +0000 UTC m=+0.296855960 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:42:23 np0005548790.localdomain podman[82949]: 2025-12-06 08:42:23.874323022 +0000 UTC m=+0.376135874 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 
17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible)
Dec 06 08:42:23 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:42:24 np0005548790.localdomain podman[82951]: 2025-12-06 08:42:24.046181203 +0000 UTC m=+0.542262980 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 08:42:24 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:42:24 np0005548790.localdomain sshd[82945]: Received disconnect from 43.163.123.45 port 49402:11: Bye Bye [preauth]
Dec 06 08:42:24 np0005548790.localdomain sshd[82945]: Disconnected from authenticating user root 43.163.123.45 port 49402 [preauth]
Dec 06 08:42:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:42:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:42:27 np0005548790.localdomain systemd[1]: tmp-crun.QERXWX.mount: Deactivated successfully.
Dec 06 08:42:27 np0005548790.localdomain podman[83077]: 2025-12-06 08:42:27.57261179 +0000 UTC m=+0.089981290 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, distribution-scope=public)
Dec 06 08:42:27 np0005548790.localdomain podman[83078]: 2025-12-06 08:42:27.617147073 +0000 UTC m=+0.130836785 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, 
io.openshift.expose-services=, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container)
Dec 06 08:42:27 np0005548790.localdomain podman[83077]: 2025-12-06 08:42:27.627606353 +0000 UTC m=+0.144975853 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:42:27 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:42:27 np0005548790.localdomain podman[83078]: 2025-12-06 08:42:27.644038512 +0000 UTC m=+0.157728224 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:42:27 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:42:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:42:31 np0005548790.localdomain podman[83124]: 2025-12-06 08:42:31.575915406 +0000 UTC m=+0.087965996 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:42:31 np0005548790.localdomain podman[83124]: 2025-12-06 08:42:31.629976394 +0000 UTC m=+0.142026974 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_compute, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:42:31 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:42:34 np0005548790.localdomain python3[83163]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 06 08:42:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:42:38 np0005548790.localdomain systemd[1]: tmp-crun.WBBo3Q.mount: Deactivated successfully.
Dec 06 08:42:38 np0005548790.localdomain podman[83164]: 2025-12-06 08:42:38.575244156 +0000 UTC m=+0.093310939 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-type=git, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 06 08:42:38 np0005548790.localdomain podman[83164]: 2025-12-06 08:42:38.793912692 +0000 UTC m=+0.311979485 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr)
Dec 06 08:42:38 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:42:45 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:42:45 np0005548790.localdomain recover_tripleo_nova_virtqemud[83194]: 62556
Dec 06 08:42:45 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:42:45 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:42:45 np0005548790.localdomain sudo[83195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:45 np0005548790.localdomain sudo[83195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:42:45 np0005548790.localdomain sudo[83195]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:45 np0005548790.localdomain sudo[83210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:42:45 np0005548790.localdomain sudo[83210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:42:46 np0005548790.localdomain podman[83294]: 2025-12-06 08:42:46.596536522 +0000 UTC m=+0.092790916 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7)
Dec 06 08:42:46 np0005548790.localdomain podman[83294]: 2025-12-06 08:42:46.726263525 +0000 UTC m=+0.222517949 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.41.4, RELEASE=main, ceph=True, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, release=1763362218, version=7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 08:42:46 np0005548790.localdomain sudo[83210]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:47 np0005548790.localdomain sudo[83361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:42:47 np0005548790.localdomain sudo[83361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:42:47 np0005548790.localdomain sudo[83361]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:47 np0005548790.localdomain sudo[83376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:42:47 np0005548790.localdomain sudo[83376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:42:47 np0005548790.localdomain sudo[83376]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:48 np0005548790.localdomain sudo[83424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:42:48 np0005548790.localdomain sudo[83424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:42:48 np0005548790.localdomain sudo[83424]: pam_unix(sudo:session): session closed for user root
Dec 06 08:42:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:42:51 np0005548790.localdomain podman[83439]: 2025-12-06 08:42:51.570805387 +0000 UTC m=+0.086226509 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1)
Dec 06 08:42:51 np0005548790.localdomain podman[83439]: 2025-12-06 08:42:51.586121408 +0000 UTC m=+0.101542580 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:42:51 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:42:54 np0005548790.localdomain podman[83463]: 2025-12-06 08:42:54.585299727 +0000 UTC m=+0.079077259 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-type=git, release=1761123044, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 06 08:42:54 np0005548790.localdomain podman[83459]: 2025-12-06 08:42:54.623839188 +0000 UTC m=+0.124587217 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, release=1761123044, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true)
Dec 06 08:42:54 np0005548790.localdomain podman[83460]: 2025-12-06 08:42:54.690872133 +0000 UTC m=+0.191614441 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 08:42:54 np0005548790.localdomain podman[83459]: 2025-12-06 08:42:54.709127522 +0000 UTC m=+0.209875561 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12)
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:42:54 np0005548790.localdomain podman[83460]: 2025-12-06 08:42:54.747214662 +0000 UTC m=+0.247956960 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible)
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:42:54 np0005548790.localdomain podman[83461]: 2025-12-06 08:42:54.661965679 +0000 UTC m=+0.156116231 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:42:54 np0005548790.localdomain podman[83461]: 2025-12-06 08:42:54.795140715 +0000 UTC m=+0.289291347 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public)
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:42:54 np0005548790.localdomain podman[83473]: 2025-12-06 08:42:54.846926072 +0000 UTC m=+0.333056719 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Dec 06 08:42:54 np0005548790.localdomain podman[83473]: 2025-12-06 08:42:54.859217521 +0000 UTC m=+0.345348098 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, architecture=x86_64, version=17.1.12)
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:42:54 np0005548790.localdomain podman[83463]: 2025-12-06 08:42:54.981549767 +0000 UTC m=+0.475327299 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, 
container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true)
Dec 06 08:42:54 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:42:55 np0005548790.localdomain systemd[1]: tmp-crun.RH9dYN.mount: Deactivated successfully.
Dec 06 08:42:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:42:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:42:58 np0005548790.localdomain systemd[1]: tmp-crun.GZ46dE.mount: Deactivated successfully.
Dec 06 08:42:58 np0005548790.localdomain podman[83573]: 2025-12-06 08:42:58.568461013 +0000 UTC m=+0.085239153 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 08:42:58 np0005548790.localdomain podman[83573]: 2025-12-06 08:42:58.589142847 +0000 UTC m=+0.105920997 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 06 08:42:58 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:42:58 np0005548790.localdomain podman[83572]: 2025-12-06 08:42:58.604135868 +0000 UTC m=+0.122831310 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:42:58 np0005548790.localdomain podman[83572]: 2025-12-06 08:42:58.650270124 +0000 UTC m=+0.168965596 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, architecture=x86_64)
Dec 06 08:42:58 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:43:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:43:02 np0005548790.localdomain systemd[1]: tmp-crun.cpZA34.mount: Deactivated successfully.
Dec 06 08:43:02 np0005548790.localdomain podman[83618]: 2025-12-06 08:43:02.559361908 +0000 UTC m=+0.078668188 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:43:02 np0005548790.localdomain podman[83618]: 2025-12-06 08:43:02.590095561 +0000 UTC m=+0.109401781 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:43:02 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:43:05 np0005548790.localdomain sshd[83644]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:43:07 np0005548790.localdomain sshd[83644]: Received disconnect from 103.226.138.52 port 46968:11: Bye Bye [preauth]
Dec 06 08:43:07 np0005548790.localdomain sshd[83644]: Disconnected from authenticating user root 103.226.138.52 port 46968 [preauth]
Dec 06 08:43:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:43:09 np0005548790.localdomain podman[83646]: 2025-12-06 08:43:09.561759161 +0000 UTC m=+0.073826469 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12)
Dec 06 08:43:09 np0005548790.localdomain podman[83646]: 2025-12-06 08:43:09.777115087 +0000 UTC m=+0.289182315 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:43:09 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:43:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:43:22 np0005548790.localdomain podman[83676]: 2025-12-06 08:43:22.554070376 +0000 UTC m=+0.070565070 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com)
Dec 06 08:43:22 np0005548790.localdomain podman[83676]: 2025-12-06 08:43:22.595096325 +0000 UTC m=+0.111590949 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, distribution-scope=public, tcib_managed=true, vcs-type=git, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:43:22 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:43:25 np0005548790.localdomain podman[83742]: 2025-12-06 08:43:25.585805548 +0000 UTC m=+0.097508592 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, tcib_managed=true)
Dec 06 08:43:25 np0005548790.localdomain podman[83742]: 2025-12-06 08:43:25.622169202 +0000 UTC m=+0.133872216 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git)
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:43:25 np0005548790.localdomain podman[83743]: 2025-12-06 08:43:25.641350536 +0000 UTC m=+0.145987341 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, 
com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4)
Dec 06 08:43:25 np0005548790.localdomain podman[83756]: 2025-12-06 08:43:25.693865321 +0000 UTC m=+0.185657372 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 06 08:43:25 np0005548790.localdomain podman[83756]: 2025-12-06 08:43:25.700657864 +0000 UTC m=+0.192449975 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:43:25 np0005548790.localdomain podman[83750]: 2025-12-06 08:43:25.749309526 +0000 UTC m=+0.246474021 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, 
config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:43:25 np0005548790.localdomain podman[83746]: 2025-12-06 08:43:25.805065749 +0000 UTC m=+0.304323720 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1)
Dec 06 08:43:25 np0005548790.localdomain podman[83743]: 2025-12-06 08:43:25.823307227 +0000 UTC m=+0.327944002 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team)
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:43:25 np0005548790.localdomain podman[83746]: 2025-12-06 08:43:25.83757119 +0000 UTC m=+0.336829211 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:43:25 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:43:26 np0005548790.localdomain podman[83750]: 2025-12-06 08:43:26.157020274 +0000 UTC m=+0.654184759 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, container_name=nova_migration_target, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4)
Dec 06 08:43:26 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:43:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:43:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:43:29 np0005548790.localdomain podman[83858]: 2025-12-06 08:43:29.575240233 +0000 UTC m=+0.088542033 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, config_id=tripleo_step4, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com)
Dec 06 08:43:29 np0005548790.localdomain podman[83859]: 2025-12-06 08:43:29.630364228 +0000 UTC m=+0.140079292 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044)
Dec 06 08:43:29 np0005548790.localdomain podman[83858]: 2025-12-06 08:43:29.645412802 +0000 UTC m=+0.158714632 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 06 08:43:29 np0005548790.localdomain podman[83859]: 2025-12-06 08:43:29.655435109 +0000 UTC m=+0.165150133 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, 
architecture=x86_64, container_name=ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:43:29 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:43:29 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:43:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:43:33 np0005548790.localdomain podman[83907]: 2025-12-06 08:43:33.569204568 +0000 UTC m=+0.083220669 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 06 08:43:33 np0005548790.localdomain podman[83907]: 2025-12-06 08:43:33.627252182 +0000 UTC m=+0.141268303 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5)
Dec 06 08:43:33 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:43:34 np0005548790.localdomain sshd[80737]: Received disconnect from 38.102.83.114 port 36278:11: disconnected by user
Dec 06 08:43:34 np0005548790.localdomain sshd[80737]: Disconnected from user zuul 38.102.83.114 port 36278
Dec 06 08:43:34 np0005548790.localdomain sshd[80724]: pam_unix(sshd:session): session closed for user zuul
Dec 06 08:43:34 np0005548790.localdomain systemd[1]: session-34.scope: Deactivated successfully.
Dec 06 08:43:34 np0005548790.localdomain systemd[1]: session-34.scope: Consumed 19.256s CPU time.
Dec 06 08:43:34 np0005548790.localdomain systemd-logind[760]: Session 34 logged out. Waiting for processes to exit.
Dec 06 08:43:34 np0005548790.localdomain systemd-logind[760]: Removed session 34.
Dec 06 08:43:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:43:40 np0005548790.localdomain podman[83933]: 2025-12-06 08:43:40.56069053 +0000 UTC m=+0.078637337 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12)
Dec 06 08:43:40 np0005548790.localdomain podman[83933]: 2025-12-06 08:43:40.775141642 +0000 UTC m=+0.293088449 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=)
Dec 06 08:43:40 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:43:42 np0005548790.localdomain sshd[83961]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:43:43 np0005548790.localdomain sshd[83961]: Received disconnect from 43.163.123.45 port 48074:11: Bye Bye [preauth]
Dec 06 08:43:43 np0005548790.localdomain sshd[83961]: Disconnected from authenticating user root 43.163.123.45 port 48074 [preauth]
Dec 06 08:43:48 np0005548790.localdomain sudo[83964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:43:48 np0005548790.localdomain sudo[83964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:43:48 np0005548790.localdomain sudo[83964]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:48 np0005548790.localdomain sudo[83979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:43:48 np0005548790.localdomain sudo[83979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:43:49 np0005548790.localdomain sudo[83979]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:50 np0005548790.localdomain sudo[84025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:43:50 np0005548790.localdomain sudo[84025]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:43:50 np0005548790.localdomain sudo[84025]: pam_unix(sudo:session): session closed for user root
Dec 06 08:43:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:43:53 np0005548790.localdomain podman[84040]: 2025-12-06 08:43:53.57782173 +0000 UTC m=+0.094416860 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:43:53 np0005548790.localdomain podman[84040]: 2025-12-06 08:43:53.617274026 +0000 UTC m=+0.133869106 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:43:53 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:43:56 np0005548790.localdomain podman[84074]: 2025-12-06 08:43:56.604487505 +0000 UTC m=+0.097464220 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.openshift.expose-services=, com.redhat.component=openstack-cron-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: tmp-crun.T2R14U.mount: Deactivated successfully.
Dec 06 08:43:56 np0005548790.localdomain podman[84060]: 2025-12-06 08:43:56.584836988 +0000 UTC m=+0.094490800 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:43:56 np0005548790.localdomain podman[84068]: 2025-12-06 08:43:56.64425691 +0000 UTC m=+0.140840853 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 08:43:56 np0005548790.localdomain podman[84074]: 2025-12-06 08:43:56.667187424 +0000 UTC m=+0.160164099 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible)
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:43:56 np0005548790.localdomain podman[84061]: 2025-12-06 08:43:56.70663318 +0000 UTC m=+0.212416889 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:43:56 np0005548790.localdomain podman[84060]: 2025-12-06 08:43:56.725703421 +0000 UTC m=+0.235357253 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
iscsid, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, architecture=x86_64, container_name=iscsid)
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:43:56 np0005548790.localdomain podman[84061]: 2025-12-06 08:43:56.762370483 +0000 UTC m=+0.268154222 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.)
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:43:56 np0005548790.localdomain podman[84062]: 2025-12-06 08:43:56.801927092 +0000 UTC m=+0.303879059 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 06 08:43:56 np0005548790.localdomain podman[84062]: 2025-12-06 08:43:56.821683001 +0000 UTC m=+0.323634928 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, 
io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:43:56 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:43:56 np0005548790.localdomain podman[84068]: 2025-12-06 08:43:56.993189083 +0000 UTC m=+0.489773066 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, url=https://www.redhat.com)
Dec 06 08:43:57 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:44:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:44:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:44:00 np0005548790.localdomain podman[84174]: 2025-12-06 08:44:00.561095011 +0000 UTC m=+0.074031784 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64)
Dec 06 08:44:00 np0005548790.localdomain systemd[1]: tmp-crun.2Jc35z.mount: Deactivated successfully.
Dec 06 08:44:00 np0005548790.localdomain podman[84175]: 2025-12-06 08:44:00.62790945 +0000 UTC m=+0.138058129 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:44:00 np0005548790.localdomain podman[84174]: 2025-12-06 08:44:00.643151628 +0000 UTC m=+0.156088471 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 06 08:44:00 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:44:00 np0005548790.localdomain podman[84175]: 2025-12-06 08:44:00.699877976 +0000 UTC m=+0.210026585 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z)
Dec 06 08:44:00 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:44:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:44:04 np0005548790.localdomain podman[84224]: 2025-12-06 08:44:04.573313176 +0000 UTC m=+0.084295368 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute)
Dec 06 08:44:04 np0005548790.localdomain podman[84224]: 2025-12-06 08:44:04.601137791 +0000 UTC m=+0.112119933 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:44:04 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:44:08 np0005548790.localdomain sshd[84250]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:44:10 np0005548790.localdomain sshd[84250]: Received disconnect from 35.247.75.98 port 55076:11: Bye Bye [preauth]
Dec 06 08:44:10 np0005548790.localdomain sshd[84250]: Disconnected from authenticating user root 35.247.75.98 port 55076 [preauth]
Dec 06 08:44:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:44:11 np0005548790.localdomain podman[84252]: 2025-12-06 08:44:11.020232095 +0000 UTC m=+0.077995070 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:44:11 np0005548790.localdomain podman[84252]: 2025-12-06 08:44:11.210065298 +0000 UTC m=+0.267828233 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible)
Dec 06 08:44:11 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:44:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:44:24 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:44:24 np0005548790.localdomain recover_tripleo_nova_virtqemud[84327]: 62556
Dec 06 08:44:24 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:44:24 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:44:24 np0005548790.localdomain systemd[1]: tmp-crun.mGGLRh.mount: Deactivated successfully.
Dec 06 08:44:24 np0005548790.localdomain podman[84325]: 2025-12-06 08:44:24.577794817 +0000 UTC m=+0.089408895 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:44:24 np0005548790.localdomain podman[84325]: 2025-12-06 08:44:24.59133909 +0000 UTC m=+0.102953138 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:51:28Z, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:44:24 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:44:27 np0005548790.localdomain podman[84350]: 2025-12-06 08:44:27.581952999 +0000 UTC m=+0.088864490 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=nova_migration_target, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: tmp-crun.FeZlFO.mount: Deactivated successfully.
Dec 06 08:44:27 np0005548790.localdomain podman[84356]: 2025-12-06 08:44:27.603207618 +0000 UTC m=+0.101096658 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:44:27 np0005548790.localdomain podman[84356]: 2025-12-06 08:44:27.607562104 +0000 UTC m=+0.105451104 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, architecture=x86_64, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4)
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:44:27 np0005548790.localdomain podman[84348]: 2025-12-06 08:44:27.687354502 +0000 UTC m=+0.194159610 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:44:27 np0005548790.localdomain podman[84348]: 2025-12-06 08:44:27.741015988 +0000 UTC m=+0.247821116 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, tcib_managed=true, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:44:27 np0005548790.localdomain podman[84347]: 2025-12-06 08:44:27.745950601 +0000 UTC m=+0.256526541 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 
17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step3)
Dec 06 08:44:27 np0005548790.localdomain podman[84349]: 2025-12-06 08:44:27.805433903 +0000 UTC m=+0.311286886 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, 
vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:44:27 np0005548790.localdomain podman[84347]: 2025-12-06 08:44:27.825510411 +0000 UTC m=+0.336086361 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:44:27 np0005548790.localdomain podman[84349]: 2025-12-06 08:44:27.833135245 +0000 UTC m=+0.338988178 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, 
container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:44:27 np0005548790.localdomain podman[84350]: 2025-12-06 08:44:27.972508257 +0000 UTC m=+0.479419778 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true)
Dec 06 08:44:27 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:44:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:44:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:44:31 np0005548790.localdomain podman[84462]: 2025-12-06 08:44:31.583887649 +0000 UTC m=+0.094431660 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:44:31 np0005548790.localdomain podman[84463]: 2025-12-06 08:44:31.637755111 +0000 UTC m=+0.144484150 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 08:44:31 np0005548790.localdomain podman[84463]: 2025-12-06 08:44:31.659707079 +0000 UTC m=+0.166436168 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-type=git, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:44:31 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:44:31 np0005548790.localdomain podman[84462]: 2025-12-06 08:44:31.712712279 +0000 UTC m=+0.223256300 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git)
Dec 06 08:44:31 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:44:35 np0005548790.localdomain sshd[84511]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:44:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:44:35 np0005548790.localdomain podman[84513]: 2025-12-06 08:44:35.296592812 +0000 UTC m=+0.081942945 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:44:35 np0005548790.localdomain podman[84513]: 2025-12-06 08:44:35.352252113 +0000 UTC m=+0.137602216 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_id=tripleo_step5)
Dec 06 08:44:35 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:44:37 np0005548790.localdomain sshd[84511]: Received disconnect from 103.226.138.52 port 57172:11: Bye Bye [preauth]
Dec 06 08:44:37 np0005548790.localdomain sshd[84511]: Disconnected from authenticating user root 103.226.138.52 port 57172 [preauth]
Dec 06 08:44:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:44:41 np0005548790.localdomain podman[84539]: 2025-12-06 08:44:41.566546052 +0000 UTC m=+0.077039934 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack 
TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step1, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:44:41 np0005548790.localdomain podman[84539]: 2025-12-06 08:44:41.778628241 +0000 UTC m=+0.289122133 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:44:41 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:44:50 np0005548790.localdomain sudo[84569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:44:50 np0005548790.localdomain sudo[84569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:44:50 np0005548790.localdomain sudo[84569]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:50 np0005548790.localdomain sudo[84584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:44:50 np0005548790.localdomain sudo[84584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:44:50 np0005548790.localdomain sudo[84584]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:51 np0005548790.localdomain sudo[84630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:44:51 np0005548790.localdomain sudo[84630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:44:51 np0005548790.localdomain sudo[84630]: pam_unix(sudo:session): session closed for user root
Dec 06 08:44:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:44:55 np0005548790.localdomain systemd[1]: tmp-crun.7ssUzd.mount: Deactivated successfully.
Dec 06 08:44:55 np0005548790.localdomain podman[84646]: 2025-12-06 08:44:55.593523471 +0000 UTC m=+0.097415968 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vcs-type=git, container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true)
Dec 06 08:44:55 np0005548790.localdomain podman[84646]: 2025-12-06 08:44:55.631429257 +0000 UTC m=+0.135321744 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 06 08:44:55 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:44:56 np0005548790.localdomain sshd[84667]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:44:57 np0005548790.localdomain sshd[84667]: Received disconnect from 43.163.123.45 port 46726:11: Bye Bye [preauth]
Dec 06 08:44:57 np0005548790.localdomain sshd[84667]: Disconnected from authenticating user root 43.163.123.45 port 46726 [preauth]
Dec 06 08:44:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:44:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:44:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:44:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:44:57 np0005548790.localdomain systemd[1]: tmp-crun.xNdUF2.mount: Deactivated successfully.
Dec 06 08:44:57 np0005548790.localdomain podman[84670]: 2025-12-06 08:44:57.97795754 +0000 UTC m=+0.103604165 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12)
Dec 06 08:44:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:44:58 np0005548790.localdomain podman[84669]: 2025-12-06 08:44:58.02686414 +0000 UTC m=+0.155429683 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
build-date=2025-11-18T23:44:13Z, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Dec 06 08:44:58 np0005548790.localdomain podman[84670]: 2025-12-06 08:44:58.03133902 +0000 UTC m=+0.156985715 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute)
Dec 06 08:44:58 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:44:58 np0005548790.localdomain podman[84669]: 2025-12-06 08:44:58.065123834 +0000 UTC m=+0.193689327 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack 
TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vcs-type=git)
Dec 06 08:44:58 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:44:58 np0005548790.localdomain podman[84671]: 2025-12-06 08:44:58.116546481 +0000 UTC m=+0.239243187 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public)
Dec 06 08:44:58 np0005548790.localdomain podman[84734]: 2025-12-06 08:44:58.158649668 +0000 UTC m=+0.128186483 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:44:58 np0005548790.localdomain podman[84672]: 2025-12-06 08:44:58.173108865 +0000 UTC m=+0.293812048 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 06 08:44:58 np0005548790.localdomain podman[84672]: 2025-12-06 08:44:58.207423014 +0000 UTC m=+0.328126187 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=logrotate_crond, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team)
Dec 06 08:44:58 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:44:58 np0005548790.localdomain podman[84671]: 2025-12-06 08:44:58.224380929 +0000 UTC m=+0.347077605 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, maintainer=OpenStack TripleO 
Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:44:58 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:44:58 np0005548790.localdomain podman[84734]: 2025-12-06 08:44:58.553318326 +0000 UTC m=+0.522855111 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 06 08:44:58 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:45:02 np0005548790.localdomain recover_tripleo_nova_virtqemud[84795]: 62556
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: tmp-crun.QVX5uM.mount: Deactivated successfully.
Dec 06 08:45:02 np0005548790.localdomain podman[84782]: 2025-12-06 08:45:02.565041378 +0000 UTC m=+0.081930555 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, architecture=x86_64)
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: tmp-crun.mL0haS.mount: Deactivated successfully.
Dec 06 08:45:02 np0005548790.localdomain podman[84783]: 2025-12-06 08:45:02.593921491 +0000 UTC m=+0.101963031 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, build-date=2025-11-18T23:34:05Z, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:45:02 np0005548790.localdomain podman[84783]: 2025-12-06 08:45:02.616892117 +0000 UTC m=+0.124933657 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, 
io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack TripleO Team)
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:45:02 np0005548790.localdomain podman[84782]: 2025-12-06 08:45:02.640980141 +0000 UTC m=+0.157869298 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:45:02 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:45:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:45:05 np0005548790.localdomain systemd[1]: tmp-crun.teeDzP.mount: Deactivated successfully.
Dec 06 08:45:05 np0005548790.localdomain podman[84828]: 2025-12-06 08:45:05.575900169 +0000 UTC m=+0.088135062 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, tcib_managed=true, batch=17.1_20251118.1)
Dec 06 08:45:05 np0005548790.localdomain podman[84828]: 2025-12-06 08:45:05.627042228 +0000 UTC m=+0.139277081 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step5, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 08:45:05 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:45:06 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 08:45:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:45:12 np0005548790.localdomain podman[84854]: 2025-12-06 08:45:12.572894886 +0000 UTC m=+0.083960858 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr)
Dec 06 08:45:12 np0005548790.localdomain podman[84854]: 2025-12-06 08:45:12.768366971 +0000 UTC m=+0.279432953 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 06 08:45:12 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:45:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:45:26 np0005548790.localdomain systemd[1]: tmp-crun.AbhN9N.mount: Deactivated successfully.
Dec 06 08:45:26 np0005548790.localdomain podman[84928]: 2025-12-06 08:45:26.576534021 +0000 UTC m=+0.095873878 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z)
Dec 06 08:45:26 np0005548790.localdomain podman[84928]: 2025-12-06 08:45:26.587960947 +0000 UTC m=+0.107300854 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Dec 06 08:45:26 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: tmp-crun.sogDD0.mount: Deactivated successfully.
Dec 06 08:45:28 np0005548790.localdomain podman[84948]: 2025-12-06 08:45:28.581791136 +0000 UTC m=+0.095151419 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:45:28 np0005548790.localdomain podman[84949]: 2025-12-06 08:45:28.630689476 +0000 UTC m=+0.139504037 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute)
Dec 06 08:45:28 np0005548790.localdomain podman[84950]: 2025-12-06 08:45:28.685881583 +0000 UTC m=+0.192400912 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Dec 06 08:45:28 np0005548790.localdomain podman[84951]: 2025-12-06 08:45:28.736436496 +0000 UTC m=+0.240804019 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=)
Dec 06 08:45:28 np0005548790.localdomain podman[84948]: 2025-12-06 08:45:28.755810295 +0000 UTC m=+0.269170628 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044)
Dec 06 08:45:28 np0005548790.localdomain podman[84949]: 2025-12-06 08:45:28.789054956 +0000 UTC m=+0.297869497 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.openshift.expose-services=, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:45:28 np0005548790.localdomain podman[84997]: 2025-12-06 08:45:28.795948591 +0000 UTC m=+0.184509752 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:45:28 np0005548790.localdomain podman[84950]: 2025-12-06 08:45:28.816232574 +0000 UTC m=+0.322751843 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 06 08:45:28 np0005548790.localdomain podman[84951]: 2025-12-06 08:45:28.821374781 +0000 UTC m=+0.325742234 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team)
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:45:28 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:45:29 np0005548790.localdomain podman[84997]: 2025-12-06 08:45:29.154060439 +0000 UTC m=+0.542621550 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=)
Dec 06 08:45:29 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:45:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:45:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:45:33 np0005548790.localdomain systemd[1]: tmp-crun.1a5K1Q.mount: Deactivated successfully.
Dec 06 08:45:33 np0005548790.localdomain podman[85065]: 2025-12-06 08:45:33.573530199 +0000 UTC m=+0.084209715 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container)
Dec 06 08:45:33 np0005548790.localdomain podman[85064]: 2025-12-06 08:45:33.622402058 +0000 UTC m=+0.136018293 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1)
Dec 06 08:45:33 np0005548790.localdomain podman[85065]: 2025-12-06 08:45:33.651548499 +0000 UTC m=+0.162228065 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=)
Dec 06 08:45:33 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:45:33 np0005548790.localdomain podman[85064]: 2025-12-06 08:45:33.691004885 +0000 UTC m=+0.204621060 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Dec 06 08:45:33 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:45:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:45:36 np0005548790.localdomain systemd[1]: tmp-crun.sdGr0g.mount: Deactivated successfully.
Dec 06 08:45:36 np0005548790.localdomain podman[85112]: 2025-12-06 08:45:36.58696527 +0000 UTC m=+0.095433937 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044)
Dec 06 08:45:36 np0005548790.localdomain podman[85112]: 2025-12-06 08:45:36.618187555 +0000 UTC m=+0.126656262 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5)
Dec 06 08:45:36 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:45:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:45:43 np0005548790.localdomain podman[85138]: 2025-12-06 08:45:43.558589517 +0000 UTC m=+0.071864854 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, 
tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=)
Dec 06 08:45:43 np0005548790.localdomain podman[85138]: 2025-12-06 08:45:43.763284298 +0000 UTC m=+0.276559655 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 06 08:45:43 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:45:51 np0005548790.localdomain sudo[85168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:45:51 np0005548790.localdomain sudo[85168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:45:51 np0005548790.localdomain sudo[85168]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:51 np0005548790.localdomain sudo[85183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:45:51 np0005548790.localdomain sudo[85183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:45:52 np0005548790.localdomain sudo[85183]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:53 np0005548790.localdomain sudo[85229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:45:53 np0005548790.localdomain sudo[85229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:45:53 np0005548790.localdomain sudo[85229]: pam_unix(sudo:session): session closed for user root
Dec 06 08:45:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:45:57 np0005548790.localdomain podman[85244]: 2025-12-06 08:45:57.572028685 +0000 UTC m=+0.086497257 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Dec 06 08:45:57 np0005548790.localdomain podman[85244]: 2025-12-06 08:45:57.610431003 +0000 UTC m=+0.124899575 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 06 08:45:57 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:45:59 np0005548790.localdomain podman[85264]: 2025-12-06 08:45:59.589369983 +0000 UTC m=+0.101313144 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-iscsid-container, container_name=iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3)
Dec 06 08:45:59 np0005548790.localdomain podman[85264]: 2025-12-06 08:45:59.628253194 +0000 UTC m=+0.140196365 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z)
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: tmp-crun.E1mHIW.mount: Deactivated successfully.
Dec 06 08:45:59 np0005548790.localdomain podman[85266]: 2025-12-06 08:45:59.641415196 +0000 UTC m=+0.147845679 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:45:59 np0005548790.localdomain podman[85266]: 2025-12-06 08:45:59.673185317 +0000 UTC m=+0.179615850 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:45:59 np0005548790.localdomain podman[85267]: 2025-12-06 08:45:59.684131621 +0000 UTC m=+0.187448411 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, 
container_name=nova_migration_target, release=1761123044, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:45:59 np0005548790.localdomain podman[85265]: 2025-12-06 08:45:59.736159443 +0000 UTC m=+0.244352044 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public)
Dec 06 08:45:59 np0005548790.localdomain podman[85274]: 2025-12-06 08:45:59.800226859 +0000 UTC m=+0.300643491 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, tcib_managed=true, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:45:59 np0005548790.localdomain podman[85265]: 2025-12-06 08:45:59.804311739 +0000 UTC m=+0.312504340 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:45:59 np0005548790.localdomain podman[85274]: 2025-12-06 08:45:59.861048448 +0000 UTC m=+0.361465080 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, container_name=logrotate_crond, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:45:59 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:46:00 np0005548790.localdomain podman[85267]: 2025-12-06 08:46:00.050080479 +0000 UTC m=+0.553397239 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.expose-services=, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:46:00 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:46:00 np0005548790.localdomain sshd[85377]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:46:01 np0005548790.localdomain sshd[85377]: Received disconnect from 35.247.75.98 port 35634:11: Bye Bye [preauth]
Dec 06 08:46:01 np0005548790.localdomain sshd[85377]: Disconnected from authenticating user root 35.247.75.98 port 35634 [preauth]
Dec 06 08:46:03 np0005548790.localdomain sshd[85379]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:46:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:46:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:46:04 np0005548790.localdomain podman[85381]: 2025-12-06 08:46:04.566285899 +0000 UTC m=+0.079518600 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, release=1761123044, tcib_managed=true, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:46:04 np0005548790.localdomain systemd[1]: tmp-crun.uPH0Qu.mount: Deactivated successfully.
Dec 06 08:46:04 np0005548790.localdomain podman[85381]: 2025-12-06 08:46:04.638269767 +0000 UTC m=+0.151502438 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Dec 06 08:46:04 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:46:04 np0005548790.localdomain podman[85382]: 2025-12-06 08:46:04.63799187 +0000 UTC m=+0.147528372 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-type=git, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, tcib_managed=true, 
com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:46:04 np0005548790.localdomain podman[85382]: 2025-12-06 08:46:04.720415366 +0000 UTC m=+0.229951878 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:46:04 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:46:06 np0005548790.localdomain sshd[85379]: Received disconnect from 103.226.138.52 port 46986:11: Bye Bye [preauth]
Dec 06 08:46:06 np0005548790.localdomain sshd[85379]: Disconnected from authenticating user root 103.226.138.52 port 46986 [preauth]
Dec 06 08:46:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:46:07 np0005548790.localdomain podman[85429]: 2025-12-06 08:46:07.567107113 +0000 UTC m=+0.082253965 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, container_name=nova_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:46:07 np0005548790.localdomain podman[85429]: 2025-12-06 08:46:07.592329498 +0000 UTC m=+0.107476330 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute)
Dec 06 08:46:07 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:46:10 np0005548790.localdomain sshd[85456]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:46:12 np0005548790.localdomain sshd[85456]: Received disconnect from 43.163.123.45 port 45388:11: Bye Bye [preauth]
Dec 06 08:46:12 np0005548790.localdomain sshd[85456]: Disconnected from authenticating user root 43.163.123.45 port 45388 [preauth]
Dec 06 08:46:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:46:14 np0005548790.localdomain podman[85458]: 2025-12-06 08:46:14.570215343 +0000 UTC m=+0.083735762 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:46:14 np0005548790.localdomain podman[85458]: 2025-12-06 08:46:14.768124543 +0000 UTC m=+0.281644932 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 08:46:14 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:46:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:46:28 np0005548790.localdomain podman[85534]: 2025-12-06 08:46:28.570925349 +0000 UTC m=+0.088917152 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd)
Dec 06 08:46:28 np0005548790.localdomain podman[85534]: 2025-12-06 08:46:28.585403727 +0000 UTC m=+0.103395530 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, 
name=rhosp17/openstack-collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T22:51:28Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044)
Dec 06 08:46:28 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: tmp-crun.vbAmpv.mount: Deactivated successfully.
Dec 06 08:46:30 np0005548790.localdomain podman[85556]: 2025-12-06 08:46:30.602324684 +0000 UTC m=+0.108236798 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:46:30 np0005548790.localdomain podman[85556]: 2025-12-06 08:46:30.687366832 +0000 UTC m=+0.193278936 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, 
name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z)
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:46:30 np0005548790.localdomain podman[85557]: 2025-12-06 08:46:30.739120867 +0000 UTC m=+0.239865023 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:46:30 np0005548790.localdomain podman[85555]: 2025-12-06 08:46:30.692607192 +0000 UTC m=+0.199389460 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:46:30 np0005548790.localdomain podman[85557]: 2025-12-06 08:46:30.767269241 +0000 UTC m=+0.268013357 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4)
Dec 06 08:46:30 np0005548790.localdomain podman[85565]: 2025-12-06 08:46:30.664403597 +0000 UTC m=+0.160021117 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron)
Dec 06 08:46:30 np0005548790.localdomain podman[85555]: 2025-12-06 08:46:30.773491497 +0000 UTC m=+0.280273735 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z)
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:46:30 np0005548790.localdomain podman[85558]: 2025-12-06 08:46:30.856997193 +0000 UTC m=+0.355106998 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:46:30 np0005548790.localdomain podman[85565]: 2025-12-06 08:46:30.900370205 +0000 UTC m=+0.395987755 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Dec 06 08:46:30 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:46:31 np0005548790.localdomain podman[85558]: 2025-12-06 08:46:31.246905244 +0000 UTC m=+0.745015049 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 06 08:46:31 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:46:31 np0005548790.localdomain systemd[1]: tmp-crun.Le7Qdo.mount: Deactivated successfully.
Dec 06 08:46:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:46:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:46:35 np0005548790.localdomain systemd[1]: tmp-crun.1Iqaci.mount: Deactivated successfully.
Dec 06 08:46:35 np0005548790.localdomain podman[85670]: 2025-12-06 08:46:35.581850909 +0000 UTC m=+0.096295359 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, 
distribution-scope=public, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044)
Dec 06 08:46:35 np0005548790.localdomain systemd[1]: tmp-crun.L4OgAB.mount: Deactivated successfully.
Dec 06 08:46:35 np0005548790.localdomain podman[85671]: 2025-12-06 08:46:35.623576787 +0000 UTC m=+0.134708878 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 06 08:46:35 np0005548790.localdomain podman[85670]: 2025-12-06 08:46:35.651254928 +0000 UTC m=+0.165699408 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 06 08:46:35 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:46:35 np0005548790.localdomain podman[85671]: 2025-12-06 08:46:35.671221773 +0000 UTC m=+0.182353884 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4)
Dec 06 08:46:35 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:46:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:46:38 np0005548790.localdomain podman[85718]: 2025-12-06 08:46:38.561537316 +0000 UTC m=+0.076855129 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:46:38 np0005548790.localdomain podman[85718]: 2025-12-06 08:46:38.61400277 +0000 UTC m=+0.129320573 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_compute, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 08:46:38 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:46:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:46:45 np0005548790.localdomain podman[85745]: 2025-12-06 08:46:45.548202547 +0000 UTC m=+0.068810253 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 06 08:46:45 np0005548790.localdomain podman[85745]: 2025-12-06 08:46:45.751008798 +0000 UTC m=+0.271616474 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 
17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, tcib_managed=true)
Dec 06 08:46:45 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:46:53 np0005548790.localdomain sudo[85772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:46:53 np0005548790.localdomain sudo[85772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:46:53 np0005548790.localdomain sudo[85772]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:53 np0005548790.localdomain sudo[85787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:46:53 np0005548790.localdomain sudo[85787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:46:54 np0005548790.localdomain sudo[85787]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:54 np0005548790.localdomain sudo[85833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:46:54 np0005548790.localdomain sudo[85833]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:46:54 np0005548790.localdomain sudo[85833]: pam_unix(sudo:session): session closed for user root
Dec 06 08:46:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:46:59 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:46:59 np0005548790.localdomain recover_tripleo_nova_virtqemud[85850]: 62556
Dec 06 08:46:59 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:46:59 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:46:59 np0005548790.localdomain podman[85848]: 2025-12-06 08:46:59.576024709 +0000 UTC m=+0.088184732 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Dec 06 08:46:59 np0005548790.localdomain podman[85848]: 2025-12-06 08:46:59.614329125 +0000 UTC m=+0.126489118 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:46:59 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:47:01 np0005548790.localdomain podman[85871]: 2025-12-06 08:47:01.558841603 +0000 UTC m=+0.074586839 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:47:01 np0005548790.localdomain podman[85874]: 2025-12-06 08:47:01.607453084 +0000 UTC m=+0.118244437 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, container_name=logrotate_crond, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:47:01 np0005548790.localdomain podman[85873]: 2025-12-06 08:47:01.626753061 +0000 UTC m=+0.136408804 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12)
Dec 06 08:47:01 np0005548790.localdomain podman[85871]: 2025-12-06 08:47:01.662514739 +0000 UTC m=+0.178259965 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:11:48Z, version=17.1.12, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute)
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:47:01 np0005548790.localdomain podman[85870]: 2025-12-06 08:47:01.676534935 +0000 UTC m=+0.188648854 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, container_name=iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:47:01 np0005548790.localdomain podman[85870]: 2025-12-06 08:47:01.686008278 +0000 UTC m=+0.198122147 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible)
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:47:01 np0005548790.localdomain podman[85874]: 2025-12-06 08:47:01.743523658 +0000 UTC m=+0.254315011 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z)
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:47:01 np0005548790.localdomain podman[85872]: 2025-12-06 08:47:01.834263998 +0000 UTC m=+0.346216541 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible)
Dec 06 08:47:01 np0005548790.localdomain podman[85872]: 2025-12-06 08:47:01.862106043 +0000 UTC m=+0.374058626 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:47:01 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:47:01 np0005548790.localdomain podman[85873]: 2025-12-06 08:47:01.990231924 +0000 UTC m=+0.499887687 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, 
name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 06 08:47:02 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:47:02 np0005548790.localdomain systemd[1]: tmp-crun.1qCMhD.mount: Deactivated successfully.
Dec 06 08:47:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:47:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:47:06 np0005548790.localdomain podman[85986]: 2025-12-06 08:47:06.567035366 +0000 UTC m=+0.081309848 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 06 08:47:06 np0005548790.localdomain podman[85986]: 2025-12-06 08:47:06.611081115 +0000 UTC m=+0.125355587 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true)
Dec 06 08:47:06 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:47:06 np0005548790.localdomain podman[85987]: 2025-12-06 08:47:06.621063953 +0000 UTC m=+0.133621560 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_controller, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:47:06 np0005548790.localdomain podman[85987]: 2025-12-06 08:47:06.704102936 +0000 UTC m=+0.216660523 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4)
Dec 06 08:47:06 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:47:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:47:09 np0005548790.localdomain systemd[1]: tmp-crun.pLcm6s.mount: Deactivated successfully.
Dec 06 08:47:09 np0005548790.localdomain podman[86035]: 2025-12-06 08:47:09.576009238 +0000 UTC m=+0.091005368 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container)
Dec 06 08:47:09 np0005548790.localdomain podman[86035]: 2025-12-06 08:47:09.604048629 +0000 UTC m=+0.119044709 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044)
Dec 06 08:47:09 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:47:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:47:16 np0005548790.localdomain podman[86061]: 2025-12-06 08:47:16.572722697 +0000 UTC m=+0.083161227 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, 
maintainer=OpenStack TripleO Team, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:47:16 np0005548790.localdomain podman[86061]: 2025-12-06 08:47:16.762804277 +0000 UTC m=+0.273242797 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.buildah.version=1.41.4, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 06 08:47:16 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:47:27 np0005548790.localdomain sshd[86112]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:47:29 np0005548790.localdomain sshd[86112]: Received disconnect from 43.163.123.45 port 44062:11: Bye Bye [preauth]
Dec 06 08:47:29 np0005548790.localdomain sshd[86112]: Disconnected from authenticating user root 43.163.123.45 port 44062 [preauth]
Dec 06 08:47:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:47:30 np0005548790.localdomain podman[86137]: 2025-12-06 08:47:30.567132025 +0000 UTC m=+0.082066140 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:47:30 np0005548790.localdomain podman[86137]: 2025-12-06 08:47:30.605244045 +0000 UTC m=+0.120178180 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com)
Dec 06 08:47:30 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:47:32 np0005548790.localdomain podman[86158]: 2025-12-06 08:47:32.586901259 +0000 UTC m=+0.098985962 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:47:32 np0005548790.localdomain podman[86158]: 2025-12-06 08:47:32.597006719 +0000 UTC m=+0.109091452 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: tmp-crun.2wyYfT.mount: Deactivated successfully.
Dec 06 08:47:32 np0005548790.localdomain podman[86159]: 2025-12-06 08:47:32.660718785 +0000 UTC m=+0.165155833 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:47:32 np0005548790.localdomain podman[86166]: 2025-12-06 08:47:32.741986391 +0000 UTC m=+0.242072302 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1)
Dec 06 08:47:32 np0005548790.localdomain podman[86159]: 2025-12-06 08:47:32.758539245 +0000 UTC m=+0.262976253 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:47:32 np0005548790.localdomain podman[86160]: 2025-12-06 08:47:32.835822205 +0000 UTC m=+0.339574475 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi)
Dec 06 08:47:32 np0005548790.localdomain podman[86160]: 2025-12-06 08:47:32.89241738 +0000 UTC m=+0.396169660 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4)
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:47:32 np0005548790.localdomain podman[86167]: 2025-12-06 08:47:32.896644192 +0000 UTC m=+0.394044001 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:47:32 np0005548790.localdomain podman[86167]: 2025-12-06 08:47:32.979181223 +0000 UTC m=+0.476581062 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 06 08:47:32 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:47:33 np0005548790.localdomain podman[86166]: 2025-12-06 08:47:33.113623593 +0000 UTC m=+0.613709504 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 08:47:33 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:47:33 np0005548790.localdomain systemd[1]: tmp-crun.uFkDac.mount: Deactivated successfully.
Dec 06 08:47:33 np0005548790.localdomain sshd[86269]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:47:35 np0005548790.localdomain sshd[86269]: Received disconnect from 103.226.138.52 port 53912:11: Bye Bye [preauth]
Dec 06 08:47:35 np0005548790.localdomain sshd[86269]: Disconnected from authenticating user root 103.226.138.52 port 53912 [preauth]
Dec 06 08:47:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:47:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:47:37 np0005548790.localdomain podman[86271]: 2025-12-06 08:47:37.567815632 +0000 UTC m=+0.080511497 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:47:37 np0005548790.localdomain systemd[1]: tmp-crun.Tts313.mount: Deactivated successfully.
Dec 06 08:47:37 np0005548790.localdomain podman[86272]: 2025-12-06 08:47:37.625140036 +0000 UTC m=+0.136026843 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, version=17.1.12, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=)
Dec 06 08:47:37 np0005548790.localdomain podman[86271]: 2025-12-06 08:47:37.639236174 +0000 UTC m=+0.151931999 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:47:37 np0005548790.localdomain podman[86272]: 2025-12-06 08:47:37.648993326 +0000 UTC m=+0.159880083 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 08:47:37 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:47:37 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:47:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:47:40 np0005548790.localdomain podman[86319]: 2025-12-06 08:47:40.598667129 +0000 UTC m=+0.082642825 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step5, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:47:40 np0005548790.localdomain podman[86319]: 2025-12-06 08:47:40.655229863 +0000 UTC m=+0.139205519 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute)
Dec 06 08:47:40 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:47:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:47:47 np0005548790.localdomain podman[86345]: 2025-12-06 08:47:47.557623259 +0000 UTC m=+0.075683098 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z)
Dec 06 08:47:47 np0005548790.localdomain podman[86345]: 2025-12-06 08:47:47.781322709 +0000 UTC m=+0.299382588 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 06 08:47:47 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:47:48 np0005548790.localdomain sshd[86374]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:47:50 np0005548790.localdomain sshd[86374]: Received disconnect from 35.247.75.98 port 57604:11: Bye Bye [preauth]
Dec 06 08:47:50 np0005548790.localdomain sshd[86374]: Disconnected from authenticating user root 35.247.75.98 port 57604 [preauth]
Dec 06 08:47:54 np0005548790.localdomain sudo[86376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:47:54 np0005548790.localdomain sudo[86376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:47:54 np0005548790.localdomain sudo[86376]: pam_unix(sudo:session): session closed for user root
Dec 06 08:47:54 np0005548790.localdomain sudo[86391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:47:54 np0005548790.localdomain sudo[86391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:47:55 np0005548790.localdomain sudo[86391]: pam_unix(sudo:session): session closed for user root
Dec 06 08:47:56 np0005548790.localdomain sudo[86438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:47:56 np0005548790.localdomain sudo[86438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:47:56 np0005548790.localdomain sudo[86438]: pam_unix(sudo:session): session closed for user root
Dec 06 08:48:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:48:01 np0005548790.localdomain podman[86453]: 2025-12-06 08:48:01.580630871 +0000 UTC m=+0.094389559 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, 
architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 06 08:48:01 np0005548790.localdomain podman[86453]: 2025-12-06 08:48:01.620303743 +0000 UTC m=+0.134062421 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:51:28Z, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1)
Dec 06 08:48:01 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:48:03 np0005548790.localdomain podman[86473]: 2025-12-06 08:48:03.597966149 +0000 UTC m=+0.106015980 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 08:48:03 np0005548790.localdomain podman[86473]: 2025-12-06 08:48:03.606459336 +0000 UTC m=+0.114509157 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: tmp-crun.8Sp96T.mount: Deactivated successfully.
Dec 06 08:48:03 np0005548790.localdomain podman[86474]: 2025-12-06 08:48:03.656152266 +0000 UTC m=+0.162824081 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:48:03 np0005548790.localdomain podman[86475]: 2025-12-06 08:48:03.700905964 +0000 UTC m=+0.203359475 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com)
Dec 06 08:48:03 np0005548790.localdomain podman[86474]: 2025-12-06 08:48:03.720501509 +0000 UTC m=+0.227173324 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 06 08:48:03 np0005548790.localdomain podman[86475]: 2025-12-06 08:48:03.729707476 +0000 UTC m=+0.232161017 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64)
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:48:03 np0005548790.localdomain podman[86487]: 2025-12-06 08:48:03.737097154 +0000 UTC m=+0.232334853 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:48:03 np0005548790.localdomain podman[86476]: 2025-12-06 08:48:03.806285246 +0000 UTC m=+0.304671289 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public)
Dec 06 08:48:03 np0005548790.localdomain podman[86487]: 2025-12-06 08:48:03.818471132 +0000 UTC m=+0.313708861 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z)
Dec 06 08:48:03 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:48:04 np0005548790.localdomain podman[86476]: 2025-12-06 08:48:04.187292629 +0000 UTC m=+0.685678722 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 08:48:04 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:48:08 np0005548790.localdomain recover_tripleo_nova_virtqemud[86592]: 62556
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:48:08 np0005548790.localdomain podman[86589]: 2025-12-06 08:48:08.552515436 +0000 UTC m=+0.069190273 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_metadata_agent, release=1761123044)
Dec 06 08:48:08 np0005548790.localdomain podman[86589]: 2025-12-06 08:48:08.609159103 +0000 UTC m=+0.125833940 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack 
TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git)
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: tmp-crun.MYPY3u.mount: Deactivated successfully.
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:48:08 np0005548790.localdomain podman[86590]: 2025-12-06 08:48:08.626014165 +0000 UTC m=+0.142480367 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:48:08 np0005548790.localdomain podman[86590]: 2025-12-06 08:48:08.675444627 +0000 UTC m=+0.191910829 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:48:08 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:48:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:48:11 np0005548790.localdomain systemd[1]: tmp-crun.ISjztp.mount: Deactivated successfully.
Dec 06 08:48:11 np0005548790.localdomain podman[86639]: 2025-12-06 08:48:11.565729621 +0000 UTC m=+0.081078562 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Dec 06 08:48:11 np0005548790.localdomain podman[86639]: 2025-12-06 08:48:11.617440315 +0000 UTC m=+0.132789206 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1)
Dec 06 08:48:11 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:48:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:48:18 np0005548790.localdomain systemd[1]: tmp-crun.dOF1uS.mount: Deactivated successfully.
Dec 06 08:48:18 np0005548790.localdomain podman[86667]: 2025-12-06 08:48:18.577168437 +0000 UTC m=+0.092642392 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 06 08:48:18 np0005548790.localdomain podman[86667]: 2025-12-06 08:48:18.806384165 +0000 UTC m=+0.321858090 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:48:18 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:48:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:48:32 np0005548790.localdomain systemd[1]: tmp-crun.3wAkQX.mount: Deactivated successfully.
Dec 06 08:48:32 np0005548790.localdomain podman[86741]: 2025-12-06 08:48:32.580927277 +0000 UTC m=+0.086638881 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z)
Dec 06 08:48:32 np0005548790.localdomain podman[86741]: 2025-12-06 08:48:32.618266976 +0000 UTC m=+0.123978600 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, release=1761123044)
Dec 06 08:48:32 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: tmp-crun.R4mpKq.mount: Deactivated successfully.
Dec 06 08:48:34 np0005548790.localdomain podman[86762]: 2025-12-06 08:48:34.615822665 +0000 UTC m=+0.133540427 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:48:34 np0005548790.localdomain podman[86764]: 2025-12-06 08:48:34.637535846 +0000 UTC m=+0.148143698 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 06 08:48:34 np0005548790.localdomain podman[86762]: 2025-12-06 08:48:34.640250049 +0000 UTC m=+0.157967891 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:48:34 np0005548790.localdomain podman[86765]: 2025-12-06 08:48:34.686945969 +0000 UTC m=+0.193797940 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:48:34 np0005548790.localdomain podman[86761]: 2025-12-06 08:48:34.589733766 +0000 UTC m=+0.107558201 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, release=1761123044)
Dec 06 08:48:34 np0005548790.localdomain podman[86765]: 2025-12-06 08:48:34.697081151 +0000 UTC m=+0.203933102 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:48:34 np0005548790.localdomain podman[86763]: 2025-12-06 08:48:34.560862653 +0000 UTC m=+0.077955378 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:48:34 np0005548790.localdomain podman[86761]: 2025-12-06 08:48:34.725246835 +0000 UTC m=+0.243071250 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, architecture=x86_64, release=1761123044, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible)
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:48:34 np0005548790.localdomain podman[86763]: 2025-12-06 08:48:34.744162862 +0000 UTC m=+0.261255577 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:48:34 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:48:35 np0005548790.localdomain podman[86764]: 2025-12-06 08:48:35.019293768 +0000 UTC m=+0.529901670 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, release=1761123044, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 06 08:48:35 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:48:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:48:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:48:39 np0005548790.localdomain podman[86874]: 2025-12-06 08:48:39.569902729 +0000 UTC m=+0.084096313 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:48:39 np0005548790.localdomain podman[86874]: 2025-12-06 08:48:39.609120799 +0000 UTC m=+0.123314413 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:48:39 np0005548790.localdomain systemd[1]: tmp-crun.mwp1XL.mount: Deactivated successfully.
Dec 06 08:48:39 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:48:39 np0005548790.localdomain podman[86875]: 2025-12-06 08:48:39.627334908 +0000 UTC m=+0.137665557 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z)
Dec 06 08:48:39 np0005548790.localdomain podman[86875]: 2025-12-06 08:48:39.652627635 +0000 UTC m=+0.162958244 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 06 08:48:39 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:48:41 np0005548790.localdomain sshd[86921]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:48:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:48:42 np0005548790.localdomain systemd[1]: tmp-crun.idudPc.mount: Deactivated successfully.
Dec 06 08:48:42 np0005548790.localdomain podman[86923]: 2025-12-06 08:48:42.555537456 +0000 UTC m=+0.074518946 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 08:48:42 np0005548790.localdomain podman[86923]: 2025-12-06 08:48:42.612392049 +0000 UTC m=+0.131373159 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Dec 06 08:48:42 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:48:42 np0005548790.localdomain sshd[86921]: Received disconnect from 43.163.123.45 port 42728:11: Bye Bye [preauth]
Dec 06 08:48:42 np0005548790.localdomain sshd[86921]: Disconnected from authenticating user root 43.163.123.45 port 42728 [preauth]
Dec 06 08:48:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:48:49 np0005548790.localdomain podman[86949]: 2025-12-06 08:48:49.565749937 +0000 UTC m=+0.081277688 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true)
Dec 06 08:48:49 np0005548790.localdomain podman[86949]: 2025-12-06 08:48:49.763210764 +0000 UTC m=+0.278738545 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, vcs-type=git)
Dec 06 08:48:49 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:48:56 np0005548790.localdomain sudo[86977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:48:56 np0005548790.localdomain sudo[86977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:48:56 np0005548790.localdomain sudo[86977]: pam_unix(sudo:session): session closed for user root
Dec 06 08:48:56 np0005548790.localdomain sudo[86992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:48:56 np0005548790.localdomain sudo[86992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:48:57 np0005548790.localdomain sudo[86992]: pam_unix(sudo:session): session closed for user root
Dec 06 08:48:57 np0005548790.localdomain sudo[87038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:48:57 np0005548790.localdomain sudo[87038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:48:57 np0005548790.localdomain sudo[87038]: pam_unix(sudo:session): session closed for user root
Dec 06 08:49:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:49:03 np0005548790.localdomain podman[87053]: 2025-12-06 08:49:03.567455989 +0000 UTC m=+0.083858422 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, batch=17.1_20251118.1)
Dec 06 08:49:03 np0005548790.localdomain podman[87053]: 2025-12-06 08:49:03.605199321 +0000 UTC m=+0.121601774 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:49:03 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:49:04 np0005548790.localdomain sshd[87071]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: tmp-crun.nS7wrk.mount: Deactivated successfully.
Dec 06 08:49:05 np0005548790.localdomain podman[87074]: 2025-12-06 08:49:05.580896052 +0000 UTC m=+0.090784078 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: tmp-crun.WUVcVf.mount: Deactivated successfully.
Dec 06 08:49:05 np0005548790.localdomain podman[87074]: 2025-12-06 08:49:05.634067769 +0000 UTC m=+0.143955834 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:49:05 np0005548790.localdomain podman[87076]: 2025-12-06 08:49:05.678755288 +0000 UTC m=+0.181315886 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1)
Dec 06 08:49:05 np0005548790.localdomain podman[87077]: 2025-12-06 08:49:05.635646392 +0000 UTC m=+0.134831890 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond)
Dec 06 08:49:05 np0005548790.localdomain podman[87077]: 2025-12-06 08:49:05.714356824 +0000 UTC m=+0.213542392 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:49:05 np0005548790.localdomain podman[87073]: 2025-12-06 08:49:05.737585547 +0000 UTC m=+0.246772944 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Dec 06 08:49:05 np0005548790.localdomain podman[87075]: 2025-12-06 08:49:05.782288637 +0000 UTC m=+0.288991867 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true)
Dec 06 08:49:05 np0005548790.localdomain podman[87073]: 2025-12-06 08:49:05.78613433 +0000 UTC m=+0.295321707 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:49:05 np0005548790.localdomain podman[87075]: 2025-12-06 08:49:05.837768966 +0000 UTC m=+0.344472276 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, tcib_managed=true, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 06 08:49:05 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:49:06 np0005548790.localdomain podman[87076]: 2025-12-06 08:49:06.062367443 +0000 UTC m=+0.564928051 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:49:06 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:49:07 np0005548790.localdomain sshd[87071]: Received disconnect from 103.226.138.52 port 57922:11: Bye Bye [preauth]
Dec 06 08:49:07 np0005548790.localdomain sshd[87071]: Disconnected from authenticating user root 103.226.138.52 port 57922 [preauth]
Dec 06 08:49:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:49:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:49:10 np0005548790.localdomain systemd[1]: tmp-crun.Yn5sVq.mount: Deactivated successfully.
Dec 06 08:49:10 np0005548790.localdomain podman[87191]: 2025-12-06 08:49:10.566573361 +0000 UTC m=+0.078899028 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 08:49:10 np0005548790.localdomain podman[87190]: 2025-12-06 08:49:10.618480154 +0000 UTC m=+0.132362743 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64)
Dec 06 08:49:10 np0005548790.localdomain podman[87191]: 2025-12-06 08:49:10.64365298 +0000 UTC m=+0.155978627 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-type=git, release=1761123044)
Dec 06 08:49:10 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:49:10 np0005548790.localdomain podman[87190]: 2025-12-06 08:49:10.661122538 +0000 UTC m=+0.175005087 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, version=17.1.12, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Dec 06 08:49:10 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:49:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:49:13 np0005548790.localdomain podman[87237]: 2025-12-06 08:49:13.567967738 +0000 UTC m=+0.084006774 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com)
Dec 06 08:49:13 np0005548790.localdomain podman[87237]: 2025-12-06 08:49:13.620819105 +0000 UTC m=+0.136858141 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:49:13 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:49:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:49:20 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:49:20 np0005548790.localdomain recover_tripleo_nova_virtqemud[87271]: 62556
Dec 06 08:49:20 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:49:20 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:49:20 np0005548790.localdomain podman[87264]: 2025-12-06 08:49:20.581795633 +0000 UTC m=+0.085909346 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, distribution-scope=public, config_id=tripleo_step1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:49:20 np0005548790.localdomain podman[87264]: 2025-12-06 08:49:20.801303715 +0000 UTC m=+0.305417338 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 06 08:49:20 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:49:32 np0005548790.localdomain sshd[87338]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:49:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:49:34 np0005548790.localdomain systemd[1]: tmp-crun.ZjrTrO.mount: Deactivated successfully.
Dec 06 08:49:34 np0005548790.localdomain podman[87340]: 2025-12-06 08:49:34.576936123 +0000 UTC m=+0.093404028 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, version=17.1.12, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:49:34 np0005548790.localdomain podman[87340]: 2025-12-06 08:49:34.590220899 +0000 UTC m=+0.106688814 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:49:34 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:49:34 np0005548790.localdomain sshd[87338]: Received disconnect from 35.247.75.98 port 50984:11: Bye Bye [preauth]
Dec 06 08:49:34 np0005548790.localdomain sshd[87338]: Disconnected from authenticating user root 35.247.75.98 port 50984 [preauth]
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: tmp-crun.UT4wh6.mount: Deactivated successfully.
Dec 06 08:49:36 np0005548790.localdomain podman[87362]: 2025-12-06 08:49:36.585397252 +0000 UTC m=+0.094495917 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute)
Dec 06 08:49:36 np0005548790.localdomain podman[87376]: 2025-12-06 08:49:36.624157042 +0000 UTC m=+0.114647667 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:49:36 np0005548790.localdomain podman[87376]: 2025-12-06 08:49:36.634453699 +0000 UTC m=+0.124944374 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-type=git, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 06 08:49:36 np0005548790.localdomain podman[87362]: 2025-12-06 08:49:36.664864334 +0000 UTC m=+0.173963009 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:49:36 np0005548790.localdomain podman[87361]: 2025-12-06 08:49:36.63525852 +0000 UTC m=+0.145775563 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, build-date=2025-11-18T23:44:13Z, container_name=iscsid, tcib_managed=true)
Dec 06 08:49:36 np0005548790.localdomain podman[87364]: 2025-12-06 08:49:36.686574077 +0000 UTC m=+0.187332848 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, container_name=nova_migration_target, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12)
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:49:36 np0005548790.localdomain podman[87363]: 2025-12-06 08:49:36.753685368 +0000 UTC m=+0.256430002 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:49:36 np0005548790.localdomain podman[87361]: 2025-12-06 08:49:36.768532966 +0000 UTC m=+0.279050059 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, container_name=iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1)
Dec 06 08:49:36 np0005548790.localdomain podman[87363]: 2025-12-06 08:49:36.778132945 +0000 UTC m=+0.280877529 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:12:45Z, 
url=https://www.redhat.com, vcs-type=git, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:49:36 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:49:37 np0005548790.localdomain podman[87364]: 2025-12-06 08:49:37.022631926 +0000 UTC m=+0.523390637 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 08:49:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:49:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:49:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:49:41 np0005548790.localdomain systemd[1]: tmp-crun.QuXoRQ.mount: Deactivated successfully.
Dec 06 08:49:41 np0005548790.localdomain podman[87476]: 2025-12-06 08:49:41.572844329 +0000 UTC m=+0.089115732 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044)
Dec 06 08:49:41 np0005548790.localdomain systemd[1]: tmp-crun.ls3lsy.mount: Deactivated successfully.
Dec 06 08:49:41 np0005548790.localdomain podman[87477]: 2025-12-06 08:49:41.63586337 +0000 UTC m=+0.146778569 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, release=1761123044, 
konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller)
Dec 06 08:49:41 np0005548790.localdomain podman[87476]: 2025-12-06 08:49:41.646130516 +0000 UTC m=+0.162401949 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, 
release=1761123044, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, version=17.1.12)
Dec 06 08:49:41 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:49:41 np0005548790.localdomain podman[87477]: 2025-12-06 08:49:41.663210294 +0000 UTC m=+0.174125503 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true)
Dec 06 08:49:41 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:49:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:49:44 np0005548790.localdomain podman[87523]: 2025-12-06 08:49:44.563761465 +0000 UTC m=+0.078176019 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:49:44 np0005548790.localdomain podman[87523]: 2025-12-06 08:49:44.592140106 +0000 UTC m=+0.106554690 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:49:44 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:49:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:49:51 np0005548790.localdomain podman[87550]: 2025-12-06 08:49:51.56635415 +0000 UTC m=+0.084385145 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack 
TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 06 08:49:51 np0005548790.localdomain podman[87550]: 2025-12-06 08:49:51.755166357 +0000 UTC m=+0.273197332 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:49:51 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:49:57 np0005548790.localdomain sudo[87579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:49:57 np0005548790.localdomain sudo[87579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:49:57 np0005548790.localdomain sudo[87579]: pam_unix(sudo:session): session closed for user root
Dec 06 08:49:57 np0005548790.localdomain sudo[87594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:49:57 np0005548790.localdomain sudo[87594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:49:58 np0005548790.localdomain sudo[87594]: pam_unix(sudo:session): session closed for user root
Dec 06 08:50:00 np0005548790.localdomain sshd[87640]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:50:01 np0005548790.localdomain sudo[87642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:50:01 np0005548790.localdomain sudo[87642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:50:01 np0005548790.localdomain sudo[87642]: pam_unix(sudo:session): session closed for user root
Dec 06 08:50:02 np0005548790.localdomain sshd[87640]: Received disconnect from 43.163.123.45 port 41394:11: Bye Bye [preauth]
Dec 06 08:50:02 np0005548790.localdomain sshd[87640]: Disconnected from authenticating user root 43.163.123.45 port 41394 [preauth]
Dec 06 08:50:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:50:05 np0005548790.localdomain podman[87657]: 2025-12-06 08:50:05.553860865 +0000 UTC m=+0.068538060 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 08:50:05 np0005548790.localdomain podman[87657]: 2025-12-06 08:50:05.587055256 +0000 UTC m=+0.101732461 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:50:05 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:50:07 np0005548790.localdomain podman[87680]: 2025-12-06 08:50:07.579261719 +0000 UTC m=+0.085099705 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1)
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: tmp-crun.6RDoWF.mount: Deactivated successfully.
Dec 06 08:50:07 np0005548790.localdomain podman[87681]: 2025-12-06 08:50:07.651299053 +0000 UTC m=+0.152840583 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public)
Dec 06 08:50:07 np0005548790.localdomain podman[87680]: 2025-12-06 08:50:07.656187493 +0000 UTC m=+0.162025469 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:50:07 np0005548790.localdomain podman[87679]: 2025-12-06 08:50:07.74588484 +0000 UTC m=+0.252667701 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, version=17.1.12, maintainer=OpenStack 
TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git)
Dec 06 08:50:07 np0005548790.localdomain podman[87678]: 2025-12-06 08:50:07.794004332 +0000 UTC m=+0.304755629 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
container_name=iscsid, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3)
Dec 06 08:50:07 np0005548790.localdomain podman[87679]: 2025-12-06 08:50:07.79913869 +0000 UTC m=+0.305921571 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12)
Dec 06 08:50:07 np0005548790.localdomain podman[87678]: 2025-12-06 08:50:07.807122354 +0000 UTC m=+0.317873621 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:50:07 np0005548790.localdomain podman[87687]: 2025-12-06 08:50:07.847876508 +0000 UTC m=+0.344119586 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:50:07 np0005548790.localdomain podman[87687]: 2025-12-06 08:50:07.881184062 +0000 UTC m=+0.377427140 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, container_name=logrotate_crond, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 08:50:07 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:50:08 np0005548790.localdomain podman[87681]: 2025-12-06 08:50:08.038950886 +0000 UTC m=+0.540492406 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:50:08 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:50:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:50:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 545 writes, 2058 keys, 545 commit groups, 1.0 writes per commit group, ingest: 2.58 MB, 0.00 MB/s
                                                          Interval WAL: 545 writes, 193 syncs, 2.82 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:50:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:50:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:50:12 np0005548790.localdomain systemd[1]: tmp-crun.b6PDJw.mount: Deactivated successfully.
Dec 06 08:50:12 np0005548790.localdomain podman[87795]: 2025-12-06 08:50:12.538394035 +0000 UTC m=+0.055111790 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, 
name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:50:12 np0005548790.localdomain podman[87795]: 2025-12-06 08:50:12.58515778 +0000 UTC m=+0.101875555 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ovn-controller, architecture=x86_64, 
config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Dec 06 08:50:12 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:50:12 np0005548790.localdomain podman[87794]: 2025-12-06 08:50:12.663978876 +0000 UTC m=+0.175851221 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, container_name=ovn_metadata_agent, version=17.1.12, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 06 08:50:12 np0005548790.localdomain podman[87794]: 2025-12-06 08:50:12.69321691 +0000 UTC m=+0.205089305 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:50:12 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:50:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 08:50:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 488 writes, 1934 keys, 488 commit groups, 1.0 writes per commit group, ingest: 2.32 MB, 0.00 MB/s
                                                          Interval WAL: 488 writes, 166 syncs, 2.94 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 08:50:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:50:15 np0005548790.localdomain podman[87842]: 2025-12-06 08:50:15.567867255 +0000 UTC m=+0.082216796 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:50:15 np0005548790.localdomain podman[87842]: 2025-12-06 08:50:15.594271724 +0000 UTC m=+0.108621235 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_compute)
Dec 06 08:50:15 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:50:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:50:22 np0005548790.localdomain systemd[1]: tmp-crun.TViQyc.mount: Deactivated successfully.
Dec 06 08:50:22 np0005548790.localdomain podman[87869]: 2025-12-06 08:50:22.563549475 +0000 UTC m=+0.077169692 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 06 08:50:22 np0005548790.localdomain podman[87869]: 2025-12-06 08:50:22.769241805 +0000 UTC m=+0.282862062 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:50:22 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:50:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:50:36 np0005548790.localdomain podman[87944]: 2025-12-06 08:50:36.559387224 +0000 UTC m=+0.072004733 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, release=1761123044, architecture=x86_64)
Dec 06 08:50:36 np0005548790.localdomain podman[87944]: 2025-12-06 08:50:36.566542466 +0000 UTC m=+0.079159975 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:51:28Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:50:36 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:50:37 np0005548790.localdomain sshd[87964]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: tmp-crun.f0ZqmJ.mount: Deactivated successfully.
Dec 06 08:50:38 np0005548790.localdomain podman[87968]: 2025-12-06 08:50:38.576573989 +0000 UTC m=+0.083455451 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 06 08:50:38 np0005548790.localdomain podman[87968]: 2025-12-06 08:50:38.634097262 +0000 UTC m=+0.140978734 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 08:50:38 np0005548790.localdomain podman[87970]: 2025-12-06 08:50:38.641145382 +0000 UTC m=+0.138053736 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:50:38 np0005548790.localdomain podman[87970]: 2025-12-06 08:50:38.676992554 +0000 UTC m=+0.173900898 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:50:38 np0005548790.localdomain podman[87969]: 2025-12-06 08:50:38.682517172 +0000 UTC m=+0.185756677 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=nova_migration_target, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044)
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:50:38 np0005548790.localdomain podman[87966]: 2025-12-06 08:50:38.72828548 +0000 UTC m=+0.236941540 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red 
Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-iscsid)
Dec 06 08:50:38 np0005548790.localdomain podman[87967]: 2025-12-06 08:50:38.685232585 +0000 UTC m=+0.193951947 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, url=https://www.redhat.com, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.)
Dec 06 08:50:38 np0005548790.localdomain podman[87967]: 2025-12-06 08:50:38.765960791 +0000 UTC m=+0.274680113 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, release=1761123044, vcs-type=git, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute)
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:50:38 np0005548790.localdomain podman[87966]: 2025-12-06 08:50:38.787984302 +0000 UTC m=+0.296640382 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-type=git)
Dec 06 08:50:38 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:50:38 np0005548790.localdomain sshd[87964]: Received disconnect from 103.226.138.52 port 36508:11: Bye Bye [preauth]
Dec 06 08:50:38 np0005548790.localdomain sshd[87964]: Disconnected from authenticating user root 103.226.138.52 port 36508 [preauth]
Dec 06 08:50:39 np0005548790.localdomain podman[87969]: 2025-12-06 08:50:39.045294077 +0000 UTC m=+0.548533532 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, container_name=nova_migration_target, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:50:39 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:50:39 np0005548790.localdomain systemd[1]: tmp-crun.0QUAN8.mount: Deactivated successfully.
Dec 06 08:50:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:50:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:50:43 np0005548790.localdomain systemd[1]: tmp-crun.Gep82g.mount: Deactivated successfully.
Dec 06 08:50:43 np0005548790.localdomain podman[88086]: 2025-12-06 08:50:43.584281937 +0000 UTC m=+0.095522125 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, 
distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, vcs-type=git, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:50:43 np0005548790.localdomain podman[88085]: 2025-12-06 08:50:43.623762167 +0000 UTC m=+0.137193433 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4)
Dec 06 08:50:43 np0005548790.localdomain podman[88086]: 2025-12-06 08:50:43.64029991 +0000 UTC m=+0.151540118 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, 
io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:50:43 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:50:43 np0005548790.localdomain podman[88085]: 2025-12-06 08:50:43.694272449 +0000 UTC m=+0.207703745 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent)
Dec 06 08:50:43 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:50:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:50:46 np0005548790.localdomain systemd[1]: tmp-crun.OIvOlp.mount: Deactivated successfully.
Dec 06 08:50:46 np0005548790.localdomain podman[88132]: 2025-12-06 08:50:46.571245608 +0000 UTC m=+0.086361379 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:50:46 np0005548790.localdomain podman[88132]: 2025-12-06 08:50:46.623157461 +0000 UTC m=+0.138273082 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z)
Dec 06 08:50:46 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:50:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:50:53 np0005548790.localdomain podman[88157]: 2025-12-06 08:50:53.561841799 +0000 UTC m=+0.080389799 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=metrics_qdr, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64)
Dec 06 08:50:53 np0005548790.localdomain podman[88157]: 2025-12-06 08:50:53.76424212 +0000 UTC m=+0.282790060 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:50:53 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:51:01 np0005548790.localdomain sudo[88186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:51:01 np0005548790.localdomain sudo[88186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:51:01 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:51:01 np0005548790.localdomain sudo[88186]: pam_unix(sudo:session): session closed for user root
Dec 06 08:51:01 np0005548790.localdomain recover_tripleo_nova_virtqemud[88202]: 62556
Dec 06 08:51:01 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:51:01 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:51:01 np0005548790.localdomain sudo[88203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 08:51:01 np0005548790.localdomain sudo[88203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:51:02 np0005548790.localdomain sudo[88203]: pam_unix(sudo:session): session closed for user root
Dec 06 08:51:02 np0005548790.localdomain sudo[88239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:51:02 np0005548790.localdomain sudo[88239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:51:02 np0005548790.localdomain sudo[88239]: pam_unix(sudo:session): session closed for user root
Dec 06 08:51:02 np0005548790.localdomain sudo[88254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:51:02 np0005548790.localdomain sudo[88254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:51:03 np0005548790.localdomain sudo[88254]: pam_unix(sudo:session): session closed for user root
Dec 06 08:51:04 np0005548790.localdomain sudo[88302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:51:04 np0005548790.localdomain sudo[88302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:51:04 np0005548790.localdomain sudo[88302]: pam_unix(sudo:session): session closed for user root
Dec 06 08:51:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:51:07 np0005548790.localdomain systemd[1]: tmp-crun.zehRCR.mount: Deactivated successfully.
Dec 06 08:51:07 np0005548790.localdomain podman[88317]: 2025-12-06 08:51:07.564702387 +0000 UTC m=+0.081726165 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.openshift.expose-services=, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044)
Dec 06 08:51:07 np0005548790.localdomain podman[88317]: 2025-12-06 08:51:07.599959523 +0000 UTC m=+0.116983331 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Dec 06 08:51:07 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: tmp-crun.PXAndZ.mount: Deactivated successfully.
Dec 06 08:51:09 np0005548790.localdomain podman[88340]: 2025-12-06 08:51:09.589456673 +0000 UTC m=+0.090291404 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 06 08:51:09 np0005548790.localdomain podman[88340]: 2025-12-06 08:51:09.619154241 +0000 UTC m=+0.119989012 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:51:09 np0005548790.localdomain podman[88339]: 2025-12-06 08:51:09.631955625 +0000 UTC m=+0.134547452 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64)
Dec 06 08:51:09 np0005548790.localdomain podman[88339]: 2025-12-06 08:51:09.658439695 +0000 UTC m=+0.161031502 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=)
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:51:09 np0005548790.localdomain podman[88338]: 2025-12-06 08:51:09.709329051 +0000 UTC m=+0.215729321 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, build-date=2025-11-18T23:44:13Z, version=17.1.12, 
com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, container_name=iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3)
Dec 06 08:51:09 np0005548790.localdomain podman[88338]: 2025-12-06 08:51:09.74545237 +0000 UTC m=+0.251852650 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, architecture=x86_64)
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:51:09 np0005548790.localdomain podman[88341]: 2025-12-06 08:51:09.78604083 +0000 UTC m=+0.283200162 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.)
Dec 06 08:51:09 np0005548790.localdomain podman[88350]: 2025-12-06 08:51:09.851407664 +0000 UTC m=+0.342698018 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Dec 06 08:51:09 np0005548790.localdomain podman[88350]: 2025-12-06 08:51:09.884321747 +0000 UTC m=+0.375612081 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 06 08:51:09 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:51:10 np0005548790.localdomain podman[88341]: 2025-12-06 08:51:10.178159013 +0000 UTC m=+0.675318355 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z)
Dec 06 08:51:10 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:51:12 np0005548790.localdomain sshd[88448]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:51:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:51:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:51:14 np0005548790.localdomain podman[88451]: 2025-12-06 08:51:14.567040254 +0000 UTC m=+0.077684766 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 
17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:51:14 np0005548790.localdomain sshd[88448]: Received disconnect from 43.163.123.45 port 40054:11: Bye Bye [preauth]
Dec 06 08:51:14 np0005548790.localdomain sshd[88448]: Disconnected from authenticating user root 43.163.123.45 port 40054 [preauth]
Dec 06 08:51:14 np0005548790.localdomain podman[88450]: 2025-12-06 08:51:14.625146194 +0000 UTC m=+0.137833220 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 08:51:14 np0005548790.localdomain podman[88451]: 2025-12-06 08:51:14.648912821 +0000 UTC m=+0.159557363 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO 
Team, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com)
Dec 06 08:51:14 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:51:14 np0005548790.localdomain podman[88450]: 2025-12-06 08:51:14.692231934 +0000 UTC m=+0.204918790 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, 
managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:51:14 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:51:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:51:17 np0005548790.localdomain podman[88494]: 2025-12-06 08:51:17.562657496 +0000 UTC m=+0.080667216 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:51:17 np0005548790.localdomain podman[88494]: 2025-12-06 08:51:17.593127783 +0000 UTC m=+0.111137493 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4)
Dec 06 08:51:17 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:51:22 np0005548790.localdomain sshd[88520]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:51:23 np0005548790.localdomain sshd[88520]: Received disconnect from 35.247.75.98 port 41714:11: Bye Bye [preauth]
Dec 06 08:51:23 np0005548790.localdomain sshd[88520]: Disconnected from authenticating user root 35.247.75.98 port 41714 [preauth]
Dec 06 08:51:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:51:24 np0005548790.localdomain systemd[1]: tmp-crun.KwbXq4.mount: Deactivated successfully.
Dec 06 08:51:24 np0005548790.localdomain podman[88522]: 2025-12-06 08:51:24.553377521 +0000 UTC m=+0.073714808 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com)
Dec 06 08:51:24 np0005548790.localdomain podman[88522]: 2025-12-06 08:51:24.743432152 +0000 UTC m=+0.263769439 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 
17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=metrics_qdr)
Dec 06 08:51:24 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:51:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:51:38 np0005548790.localdomain podman[88597]: 2025-12-06 08:51:38.572423983 +0000 UTC m=+0.088919666 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 06 08:51:38 np0005548790.localdomain podman[88597]: 2025-12-06 08:51:38.585055593 +0000 UTC m=+0.101551296 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1)
Dec 06 08:51:38 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: tmp-crun.WEOexA.mount: Deactivated successfully.
Dec 06 08:51:40 np0005548790.localdomain podman[88617]: 2025-12-06 08:51:40.580855543 +0000 UTC m=+0.093451259 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:51:40 np0005548790.localdomain podman[88617]: 2025-12-06 08:51:40.590090981 +0000 UTC m=+0.102686657 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, 
batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:51:40 np0005548790.localdomain podman[88631]: 2025-12-06 08:51:40.651963711 +0000 UTC m=+0.151824496 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, release=1761123044, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:51:40 np0005548790.localdomain podman[88631]: 2025-12-06 08:51:40.684942456 +0000 UTC m=+0.184803211 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, container_name=logrotate_crond, version=17.1.12, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:51:40 np0005548790.localdomain podman[88620]: 2025-12-06 08:51:40.754367919 +0000 UTC m=+0.256346540 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container)
Dec 06 08:51:40 np0005548790.localdomain podman[88619]: 2025-12-06 08:51:40.686747925 +0000 UTC m=+0.192645782 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., vcs-type=git, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4)
Dec 06 08:51:40 np0005548790.localdomain podman[88619]: 2025-12-06 08:51:40.832621749 +0000 UTC m=+0.338519586 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible)
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:51:40 np0005548790.localdomain podman[88618]: 2025-12-06 08:51:40.846924183 +0000 UTC m=+0.358272566 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public)
Dec 06 08:51:40 np0005548790.localdomain podman[88618]: 2025-12-06 08:51:40.896907385 +0000 UTC m=+0.408255768 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 06 08:51:40 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:51:41 np0005548790.localdomain podman[88620]: 2025-12-06 08:51:41.1683912 +0000 UTC m=+0.670369861 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:51:41 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:51:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:51:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:51:45 np0005548790.localdomain systemd[1]: tmp-crun.Ym6jHj.mount: Deactivated successfully.
Dec 06 08:51:45 np0005548790.localdomain podman[88732]: 2025-12-06 08:51:45.544744296 +0000 UTC m=+0.060325029 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, release=1761123044, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
container_name=ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 06 08:51:45 np0005548790.localdomain systemd[1]: tmp-crun.id3tOT.mount: Deactivated successfully.
Dec 06 08:51:45 np0005548790.localdomain podman[88731]: 2025-12-06 08:51:45.568601347 +0000 UTC m=+0.086404420 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:51:45 np0005548790.localdomain podman[88732]: 2025-12-06 08:51:45.600099942 +0000 UTC m=+0.115680665 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044)
Dec 06 08:51:45 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:51:45 np0005548790.localdomain podman[88731]: 2025-12-06 08:51:45.634075044 +0000 UTC m=+0.151878117 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:51:45 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:51:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:51:48 np0005548790.localdomain podman[88778]: 2025-12-06 08:51:48.560965822 +0000 UTC m=+0.076489583 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, release=1761123044, io.buildah.version=1.41.4, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container)
Dec 06 08:51:48 np0005548790.localdomain podman[88778]: 2025-12-06 08:51:48.614164169 +0000 UTC m=+0.129687920 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_compute, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:51:48 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:51:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:51:55 np0005548790.localdomain podman[88805]: 2025-12-06 08:51:55.564924093 +0000 UTC m=+0.078573219 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, 
maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044)
Dec 06 08:51:55 np0005548790.localdomain podman[88805]: 2025-12-06 08:51:55.75406107 +0000 UTC m=+0.267710186 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 06 08:51:55 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:52:04 np0005548790.localdomain sudo[88835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:52:04 np0005548790.localdomain sudo[88835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:52:04 np0005548790.localdomain sudo[88835]: pam_unix(sudo:session): session closed for user root
Dec 06 08:52:04 np0005548790.localdomain sudo[88850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:52:04 np0005548790.localdomain sudo[88850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:52:04 np0005548790.localdomain sudo[88850]: pam_unix(sudo:session): session closed for user root
Dec 06 08:52:05 np0005548790.localdomain sshd[88896]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:52:05 np0005548790.localdomain sudo[88898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:52:05 np0005548790.localdomain sudo[88898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:52:05 np0005548790.localdomain sudo[88898]: pam_unix(sudo:session): session closed for user root
Dec 06 08:52:07 np0005548790.localdomain sshd[88896]: Received disconnect from 103.226.138.52 port 36228:11: Bye Bye [preauth]
Dec 06 08:52:07 np0005548790.localdomain sshd[88896]: Disconnected from authenticating user root 103.226.138.52 port 36228 [preauth]
Dec 06 08:52:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:52:09 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:52:09 np0005548790.localdomain recover_tripleo_nova_virtqemud[88915]: 62556
Dec 06 08:52:09 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:52:09 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:52:09 np0005548790.localdomain podman[88913]: 2025-12-06 08:52:09.571899189 +0000 UTC m=+0.090610732 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd)
Dec 06 08:52:09 np0005548790.localdomain podman[88913]: 2025-12-06 08:52:09.58234083 +0000 UTC m=+0.101052373 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, tcib_managed=true, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.)
Dec 06 08:52:09 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: tmp-crun.MJInkj.mount: Deactivated successfully.
Dec 06 08:52:11 np0005548790.localdomain podman[88933]: 2025-12-06 08:52:11.599193615 +0000 UTC m=+0.102592494 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:52:11 np0005548790.localdomain podman[88933]: 2025-12-06 08:52:11.633513876 +0000 UTC m=+0.136912735 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO 
Team, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:52:11 np0005548790.localdomain podman[88934]: 2025-12-06 08:52:11.684823783 +0000 UTC m=+0.183851135 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:52:11 np0005548790.localdomain podman[88946]: 2025-12-06 08:52:11.642055045 +0000 UTC m=+0.131345786 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, batch=17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:52:11 np0005548790.localdomain podman[88934]: 2025-12-06 08:52:11.743440146 +0000 UTC m=+0.242467468 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:52:11 np0005548790.localdomain podman[88932]: 2025-12-06 08:52:11.757032671 +0000 UTC m=+0.259766923 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, release=1761123044, tcib_managed=true, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:52:11 np0005548790.localdomain podman[88946]: 2025-12-06 08:52:11.775680091 +0000 UTC m=+0.264970872 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044)
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:52:11 np0005548790.localdomain podman[88932]: 2025-12-06 08:52:11.794358862 +0000 UTC m=+0.297092784 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true)
Dec 06 08:52:11 np0005548790.localdomain podman[88935]: 2025-12-06 08:52:11.794382563 +0000 UTC m=+0.287529467 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:52:11 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:52:12 np0005548790.localdomain podman[88935]: 2025-12-06 08:52:12.178639606 +0000 UTC m=+0.671786460 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:52:12 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:52:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:52:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:52:16 np0005548790.localdomain podman[89043]: 2025-12-06 08:52:16.564415994 +0000 UTC m=+0.081128978 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, architecture=x86_64, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Dec 06 08:52:16 np0005548790.localdomain podman[89042]: 2025-12-06 08:52:16.619283897 +0000 UTC m=+0.137966063 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, 
distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:52:16 np0005548790.localdomain podman[89043]: 2025-12-06 08:52:16.642616713 +0000 UTC m=+0.159329677 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:52:16 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:52:16 np0005548790.localdomain podman[89042]: 2025-12-06 08:52:16.678678221 +0000 UTC m=+0.197360367 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-type=git)
Dec 06 08:52:16 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:52:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:52:19 np0005548790.localdomain systemd[1]: tmp-crun.JYgZb4.mount: Deactivated successfully.
Dec 06 08:52:19 np0005548790.localdomain podman[89090]: 2025-12-06 08:52:19.563090278 +0000 UTC m=+0.079174216 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12)
Dec 06 08:52:19 np0005548790.localdomain podman[89090]: 2025-12-06 08:52:19.589489926 +0000 UTC m=+0.105573824 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, url=https://www.redhat.com)
Dec 06 08:52:19 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:52:23 np0005548790.localdomain sshd[89115]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:52:25 np0005548790.localdomain sshd[89115]: Received disconnect from 43.163.123.45 port 38716:11: Bye Bye [preauth]
Dec 06 08:52:25 np0005548790.localdomain sshd[89115]: Disconnected from authenticating user root 43.163.123.45 port 38716 [preauth]
Dec 06 08:52:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:52:26 np0005548790.localdomain systemd[1]: tmp-crun.b6ZtaR.mount: Deactivated successfully.
Dec 06 08:52:26 np0005548790.localdomain podman[89117]: 2025-12-06 08:52:26.56408472 +0000 UTC m=+0.080468131 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vendor=Red Hat, Inc., 
distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible)
Dec 06 08:52:26 np0005548790.localdomain podman[89117]: 2025-12-06 08:52:26.749407803 +0000 UTC m=+0.265791204 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 06 08:52:26 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:52:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:52:40 np0005548790.localdomain podman[89169]: 2025-12-06 08:52:40.549983581 +0000 UTC m=+0.066410123 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, 
build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:52:40 np0005548790.localdomain podman[89169]: 2025-12-06 08:52:40.563278808 +0000 UTC m=+0.079705340 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:52:40 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: tmp-crun.TCzl90.mount: Deactivated successfully.
Dec 06 08:52:42 np0005548790.localdomain podman[89189]: 2025-12-06 08:52:42.560215509 +0000 UTC m=+0.079254618 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc.)
Dec 06 08:52:42 np0005548790.localdomain podman[89190]: 2025-12-06 08:52:42.577217085 +0000 UTC m=+0.090306144 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1)
Dec 06 08:52:42 np0005548790.localdomain podman[89203]: 2025-12-06 08:52:42.630090714 +0000 UTC m=+0.133520515 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, release=1761123044, vendor=Red Hat, Inc., container_name=logrotate_crond, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-cron-container, 
summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64)
Dec 06 08:52:42 np0005548790.localdomain podman[89203]: 2025-12-06 08:52:42.64302132 +0000 UTC m=+0.146451081 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:52:42 np0005548790.localdomain podman[89191]: 2025-12-06 08:52:42.678503253 +0000 UTC m=+0.188153910 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1)
Dec 06 08:52:42 np0005548790.localdomain podman[89190]: 2025-12-06 08:52:42.683936779 +0000 UTC m=+0.197025858 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044)
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:52:42 np0005548790.localdomain podman[89189]: 2025-12-06 08:52:42.697531674 +0000 UTC m=+0.216570803 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., container_name=iscsid)
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:52:42 np0005548790.localdomain podman[89191]: 2025-12-06 08:52:42.729568233 +0000 UTC m=+0.239218880 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 06 08:52:42 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:52:42 np0005548790.localdomain podman[89197]: 2025-12-06 08:52:42.786858191 +0000 UTC m=+0.292215614 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:52:43 np0005548790.localdomain podman[89197]: 2025-12-06 08:52:43.159213314 +0000 UTC m=+0.664570817 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=nova_migration_target, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git)
Dec 06 08:52:43 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:52:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:52:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:52:47 np0005548790.localdomain podman[89301]: 2025-12-06 08:52:47.577170067 +0000 UTC m=+0.087031076 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:52:47 np0005548790.localdomain podman[89302]: 2025-12-06 08:52:47.625002951 +0000 UTC m=+0.131900880 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git)
Dec 06 08:52:47 np0005548790.localdomain podman[89301]: 2025-12-06 08:52:47.646536208 +0000 UTC m=+0.156397197 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 06 08:52:47 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:52:47 np0005548790.localdomain podman[89302]: 2025-12-06 08:52:47.679339669 +0000 UTC m=+0.186237608 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:52:47 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:52:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:52:50 np0005548790.localdomain podman[89349]: 2025-12-06 08:52:50.565861974 +0000 UTC m=+0.083272066 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, architecture=x86_64, container_name=nova_compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:52:50 np0005548790.localdomain podman[89349]: 2025-12-06 08:52:50.621158547 +0000 UTC m=+0.138568639 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute)
Dec 06 08:52:50 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:52:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:52:57 np0005548790.localdomain systemd[1]: tmp-crun.7WNoUB.mount: Deactivated successfully.
Dec 06 08:52:57 np0005548790.localdomain podman[89375]: 2025-12-06 08:52:57.560633389 +0000 UTC m=+0.079307829 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, distribution-scope=public, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:52:57 np0005548790.localdomain podman[89375]: 2025-12-06 08:52:57.747700519 +0000 UTC m=+0.266374929 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, 
com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:52:57 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:53:05 np0005548790.localdomain sudo[89404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:53:05 np0005548790.localdomain sudo[89404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:53:05 np0005548790.localdomain sudo[89404]: pam_unix(sudo:session): session closed for user root
Dec 06 08:53:05 np0005548790.localdomain sudo[89419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 08:53:05 np0005548790.localdomain sudo[89419]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:53:06 np0005548790.localdomain podman[89503]: 2025-12-06 08:53:06.522505015 +0000 UTC m=+0.090609393 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 08:53:06 np0005548790.localdomain podman[89503]: 2025-12-06 08:53:06.619468567 +0000 UTC m=+0.187572985 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1763362218, version=7, architecture=x86_64, distribution-scope=public, name=rhceph)
Dec 06 08:53:06 np0005548790.localdomain sudo[89419]: pam_unix(sudo:session): session closed for user root
Dec 06 08:53:06 np0005548790.localdomain sudo[89571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:53:06 np0005548790.localdomain sudo[89571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:53:06 np0005548790.localdomain sudo[89571]: pam_unix(sudo:session): session closed for user root
Dec 06 08:53:07 np0005548790.localdomain sudo[89586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:53:07 np0005548790.localdomain sudo[89586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:53:07 np0005548790.localdomain sudo[89586]: pam_unix(sudo:session): session closed for user root
Dec 06 08:53:08 np0005548790.localdomain sudo[89632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:53:08 np0005548790.localdomain sudo[89632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:53:08 np0005548790.localdomain sudo[89632]: pam_unix(sudo:session): session closed for user root
Dec 06 08:53:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:53:11 np0005548790.localdomain podman[89647]: 2025-12-06 08:53:11.575873259 +0000 UTC m=+0.087484468 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd)
Dec 06 08:53:11 np0005548790.localdomain podman[89647]: 2025-12-06 08:53:11.590418631 +0000 UTC m=+0.102029800 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 08:53:11 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:53:13 np0005548790.localdomain podman[89668]: 2025-12-06 08:53:13.593191247 +0000 UTC m=+0.102296666 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, 
com.redhat.component=openstack-iscsid-container, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, container_name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 08:53:13 np0005548790.localdomain podman[89671]: 2025-12-06 08:53:13.642916571 +0000 UTC m=+0.144377755 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, container_name=nova_migration_target, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=)
Dec 06 08:53:13 np0005548790.localdomain podman[89668]: 2025-12-06 08:53:13.657292967 +0000 UTC m=+0.166398386 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, url=https://www.redhat.com, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Dec 06 08:53:13 np0005548790.localdomain podman[89669]: 2025-12-06 08:53:13.700065195 +0000 UTC m=+0.208759044 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
tcib_managed=true, batch=17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:53:13 np0005548790.localdomain podman[89669]: 2025-12-06 08:53:13.739222296 +0000 UTC m=+0.247916155 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, container_name=ceilometer_agent_compute)
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:53:13 np0005548790.localdomain podman[89670]: 2025-12-06 08:53:13.802302388 +0000 UTC m=+0.308097859 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:53:13 np0005548790.localdomain podman[89670]: 2025-12-06 08:53:13.834200265 +0000 UTC m=+0.339995776 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:53:13 np0005548790.localdomain podman[89672]: 2025-12-06 08:53:13.851069708 +0000 UTC m=+0.347684533 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true)
Dec 06 08:53:13 np0005548790.localdomain podman[89672]: 2025-12-06 08:53:13.884850794 +0000 UTC m=+0.381465649 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, architecture=x86_64, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:53:13 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:53:14 np0005548790.localdomain podman[89671]: 2025-12-06 08:53:14.036272948 +0000 UTC m=+0.537734082 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
container_name=nova_migration_target, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 06 08:53:14 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:53:14 np0005548790.localdomain systemd[1]: tmp-crun.5qCa3C.mount: Deactivated successfully.
Dec 06 08:53:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:53:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:53:18 np0005548790.localdomain podman[89778]: 2025-12-06 08:53:18.571923397 +0000 UTC m=+0.082354621 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, version=17.1.12)
Dec 06 08:53:18 np0005548790.localdomain podman[89779]: 2025-12-06 08:53:18.624907909 +0000 UTC m=+0.132285151 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, tcib_managed=true, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 06 08:53:18 np0005548790.localdomain podman[89778]: 2025-12-06 08:53:18.646621481 +0000 UTC m=+0.157052705 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:53:18 np0005548790.localdomain sshd[89822]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:53:18 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:53:18 np0005548790.localdomain podman[89779]: 2025-12-06 08:53:18.680317326 +0000 UTC m=+0.187694528 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 06 08:53:18 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:53:20 np0005548790.localdomain sshd[89822]: Received disconnect from 35.247.75.98 port 34680:11: Bye Bye [preauth]
Dec 06 08:53:20 np0005548790.localdomain sshd[89822]: Disconnected from authenticating user root 35.247.75.98 port 34680 [preauth]
Dec 06 08:53:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:53:21 np0005548790.localdomain podman[89827]: 2025-12-06 08:53:21.567462816 +0000 UTC m=+0.085038222 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, config_id=tripleo_step5, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Dec 06 08:53:21 np0005548790.localdomain podman[89827]: 2025-12-06 08:53:21.601232253 +0000 UTC m=+0.118807679 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute)
Dec 06 08:53:21 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:53:25 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:53:25 np0005548790.localdomain recover_tripleo_nova_virtqemud[89854]: 62556
Dec 06 08:53:25 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:53:25 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:53:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:53:28 np0005548790.localdomain podman[89855]: 2025-12-06 08:53:28.567582575 +0000 UTC m=+0.085231958 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:53:28 np0005548790.localdomain podman[89855]: 2025-12-06 08:53:28.79246781 +0000 UTC m=+0.310117233 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:53:28 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:53:34 np0005548790.localdomain sshd[89908]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:53:36 np0005548790.localdomain sshd[89908]: Received disconnect from 43.163.123.45 port 37378:11: Bye Bye [preauth]
Dec 06 08:53:36 np0005548790.localdomain sshd[89908]: Disconnected from authenticating user root 43.163.123.45 port 37378 [preauth]
Dec 06 08:53:41 np0005548790.localdomain sshd[89910]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:53:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:53:42 np0005548790.localdomain podman[89912]: 2025-12-06 08:53:42.580969244 +0000 UTC m=+0.098046532 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
release=1761123044, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Dec 06 08:53:42 np0005548790.localdomain podman[89912]: 2025-12-06 08:53:42.595156945 +0000 UTC m=+0.112234223 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, container_name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-collectd-container)
Dec 06 08:53:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:53:43 np0005548790.localdomain sshd[89910]: Received disconnect from 103.226.138.52 port 50506:11: Bye Bye [preauth]
Dec 06 08:53:43 np0005548790.localdomain sshd[89910]: Disconnected from authenticating user root 103.226.138.52 port 50506 [preauth]
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:53:44 np0005548790.localdomain podman[89940]: 2025-12-06 08:53:44.587293196 +0000 UTC m=+0.084583050 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4)
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: tmp-crun.2aE9XC.mount: Deactivated successfully.
Dec 06 08:53:44 np0005548790.localdomain podman[89935]: 2025-12-06 08:53:44.631659807 +0000 UTC m=+0.135285551 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, tcib_managed=true, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: tmp-crun.TITHGm.mount: Deactivated successfully.
Dec 06 08:53:44 np0005548790.localdomain podman[89933]: 2025-12-06 08:53:44.673973923 +0000 UTC m=+0.183483585 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git)
Dec 06 08:53:44 np0005548790.localdomain podman[89935]: 2025-12-06 08:53:44.68614378 +0000 UTC m=+0.189769574 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, 
build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:53:44 np0005548790.localdomain podman[89942]: 2025-12-06 08:53:44.695646334 +0000 UTC m=+0.189493006 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:53:44 np0005548790.localdomain podman[89942]: 2025-12-06 08:53:44.731221769 +0000 UTC m=+0.225068431 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:53:44 np0005548790.localdomain podman[89933]: 2025-12-06 08:53:44.757185066 +0000 UTC m=+0.266694688 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044)
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:53:44 np0005548790.localdomain podman[89934]: 2025-12-06 08:53:44.735147914 +0000 UTC m=+0.240279929 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:53:44 np0005548790.localdomain podman[89934]: 2025-12-06 08:53:44.81767295 +0000 UTC m=+0.322804945 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, version=17.1.12, container_name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:53:44 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:53:45 np0005548790.localdomain podman[89940]: 2025-12-06 08:53:45.002287594 +0000 UTC m=+0.499577468 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:53:45 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:53:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:53:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:53:49 np0005548790.localdomain podman[90046]: 2025-12-06 08:53:49.569949204 +0000 UTC m=+0.075954079 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, release=1761123044, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 06 08:53:49 np0005548790.localdomain systemd[1]: tmp-crun.97XPAZ.mount: Deactivated successfully.
Dec 06 08:53:49 np0005548790.localdomain podman[90047]: 2025-12-06 08:53:49.628393953 +0000 UTC m=+0.131730917 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 08:53:49 np0005548790.localdomain podman[90046]: 2025-12-06 08:53:49.649220051 +0000 UTC m=+0.155224906 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 06 08:53:49 np0005548790.localdomain podman[90047]: 2025-12-06 08:53:49.652036487 +0000 UTC m=+0.155373461 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=ovn_controller, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4)
Dec 06 08:53:49 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:53:49 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:53:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:53:52 np0005548790.localdomain systemd[1]: tmp-crun.rAcL8L.mount: Deactivated successfully.
Dec 06 08:53:52 np0005548790.localdomain podman[90093]: 2025-12-06 08:53:52.564271141 +0000 UTC m=+0.081399966 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, tcib_managed=true, release=1761123044, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com)
Dec 06 08:53:52 np0005548790.localdomain podman[90093]: 2025-12-06 08:53:52.59035287 +0000 UTC m=+0.107481685 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 06 08:53:52 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:53:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:53:59 np0005548790.localdomain systemd[1]: tmp-crun.3mgoWm.mount: Deactivated successfully.
Dec 06 08:53:59 np0005548790.localdomain podman[90120]: 2025-12-06 08:53:59.571237043 +0000 UTC m=+0.086558863 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd)
Dec 06 08:53:59 np0005548790.localdomain podman[90120]: 2025-12-06 08:53:59.800156515 +0000 UTC m=+0.315478355 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, name=rhosp17/openstack-qdrouterd, distribution-scope=public)
Dec 06 08:53:59 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:54:08 np0005548790.localdomain sudo[90150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:54:08 np0005548790.localdomain sudo[90150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:54:08 np0005548790.localdomain sudo[90150]: pam_unix(sudo:session): session closed for user root
Dec 06 08:54:08 np0005548790.localdomain sudo[90165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:54:08 np0005548790.localdomain sudo[90165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:54:09 np0005548790.localdomain sudo[90165]: pam_unix(sudo:session): session closed for user root
Dec 06 08:54:09 np0005548790.localdomain sudo[90211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:54:09 np0005548790.localdomain sudo[90211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:54:09 np0005548790.localdomain sudo[90211]: pam_unix(sudo:session): session closed for user root
Dec 06 08:54:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:54:13 np0005548790.localdomain podman[90226]: 2025-12-06 08:54:13.566000581 +0000 UTC m=+0.080444049 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, 
io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:54:13 np0005548790.localdomain podman[90226]: 2025-12-06 08:54:13.579116444 +0000 UTC m=+0.093559952 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, container_name=collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 08:54:13 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:54:15 np0005548790.localdomain podman[90246]: 2025-12-06 08:54:15.582316213 +0000 UTC m=+0.093510541 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Dec 06 08:54:15 np0005548790.localdomain podman[90247]: 2025-12-06 08:54:15.613028457 +0000 UTC m=+0.122102078 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, version=17.1.12, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:54:15 np0005548790.localdomain podman[90249]: 2025-12-06 08:54:15.633871096 +0000 UTC m=+0.136343250 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:54:15 np0005548790.localdomain podman[90247]: 2025-12-06 08:54:15.645292503 +0000 UTC m=+0.154366124 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1)
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:54:15 np0005548790.localdomain podman[90262]: 2025-12-06 08:54:15.689115949 +0000 UTC m=+0.184839651 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, distribution-scope=public, container_name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 06 08:54:15 np0005548790.localdomain podman[90262]: 2025-12-06 08:54:15.696544628 +0000 UTC m=+0.192268300 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, 
architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:54:15 np0005548790.localdomain podman[90246]: 2025-12-06 08:54:15.71786183 +0000 UTC m=+0.229056208 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, tcib_managed=true)
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:54:15 np0005548790.localdomain podman[90248]: 2025-12-06 08:54:15.731709602 +0000 UTC m=+0.235991524 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:12:45Z)
Dec 06 08:54:15 np0005548790.localdomain podman[90248]: 2025-12-06 08:54:15.761013888 +0000 UTC m=+0.265295830 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 06 08:54:15 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:54:15 np0005548790.localdomain podman[90249]: 2025-12-06 08:54:15.999292612 +0000 UTC m=+0.501764816 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:54:16 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:54:16 np0005548790.localdomain systemd[1]: tmp-crun.bk5THR.mount: Deactivated successfully.
Dec 06 08:54:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:54:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:54:20 np0005548790.localdomain systemd[1]: tmp-crun.W9yNh7.mount: Deactivated successfully.
Dec 06 08:54:20 np0005548790.localdomain podman[90363]: 2025-12-06 08:54:20.55642329 +0000 UTC m=+0.070523063 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64)
Dec 06 08:54:20 np0005548790.localdomain podman[90362]: 2025-12-06 08:54:20.569297946 +0000 UTC m=+0.083434880 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git)
Dec 06 08:54:20 np0005548790.localdomain podman[90363]: 2025-12-06 08:54:20.609062113 +0000 UTC m=+0.123161816 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 
17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:54:20 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:54:20 np0005548790.localdomain podman[90362]: 2025-12-06 08:54:20.659204539 +0000 UTC m=+0.173341513 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:54:20 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:54:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:54:23 np0005548790.localdomain podman[90410]: 2025-12-06 08:54:23.561392444 +0000 UTC m=+0.077516842 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute)
Dec 06 08:54:23 np0005548790.localdomain podman[90410]: 2025-12-06 08:54:23.615912727 +0000 UTC m=+0.132037145 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:54:23 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:54:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:54:30 np0005548790.localdomain podman[90436]: 2025-12-06 08:54:30.571164581 +0000 UTC m=+0.088958518 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:54:30 np0005548790.localdomain podman[90436]: 2025-12-06 08:54:30.779071371 +0000 UTC m=+0.296865338 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, distribution-scope=public, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:54:30 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:54:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:54:44 np0005548790.localdomain systemd[1]: tmp-crun.JuE5UD.mount: Deactivated successfully.
Dec 06 08:54:44 np0005548790.localdomain podman[90488]: 2025-12-06 08:54:44.566887286 +0000 UTC m=+0.081851747 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:54:44 np0005548790.localdomain podman[90488]: 2025-12-06 08:54:44.604184876 +0000 UTC m=+0.119149357 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 06 08:54:44 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: tmp-crun.bVd3x4.mount: Deactivated successfully.
Dec 06 08:54:46 np0005548790.localdomain podman[90510]: 2025-12-06 08:54:46.582076626 +0000 UTC m=+0.088615638 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4)
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: tmp-crun.FqApmP.mount: Deactivated successfully.
Dec 06 08:54:46 np0005548790.localdomain podman[90508]: 2025-12-06 08:54:46.632032297 +0000 UTC m=+0.146960524 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:54:46 np0005548790.localdomain podman[90510]: 2025-12-06 08:54:46.638422769 +0000 UTC m=+0.144961731 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:54:46 np0005548790.localdomain podman[90508]: 2025-12-06 08:54:46.665207498 +0000 UTC m=+0.180135695 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-iscsid-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12)
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:54:46 np0005548790.localdomain podman[90509]: 2025-12-06 08:54:46.686871429 +0000 UTC m=+0.197659816 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1)
Dec 06 08:54:46 np0005548790.localdomain podman[90516]: 2025-12-06 08:54:46.642229911 +0000 UTC m=+0.145798273 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:54:46 np0005548790.localdomain podman[90522]: 2025-12-06 08:54:46.733818609 +0000 UTC m=+0.233246591 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Dec 06 08:54:46 np0005548790.localdomain podman[90509]: 2025-12-06 08:54:46.769070425 +0000 UTC m=+0.279858812 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:54:46 np0005548790.localdomain podman[90522]: 2025-12-06 08:54:46.822518379 +0000 UTC m=+0.321946391 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.expose-services=, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 06 08:54:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:54:47 np0005548790.localdomain podman[90516]: 2025-12-06 08:54:47.025183719 +0000 UTC m=+0.528752121 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=)
Dec 06 08:54:47 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:54:48 np0005548790.localdomain sshd[90620]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:54:49 np0005548790.localdomain sshd[90620]: Received disconnect from 43.163.123.45 port 36036:11: Bye Bye [preauth]
Dec 06 08:54:49 np0005548790.localdomain sshd[90620]: Disconnected from authenticating user root 43.163.123.45 port 36036 [preauth]
Dec 06 08:54:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:54:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:54:51 np0005548790.localdomain podman[90622]: 2025-12-06 08:54:51.566732508 +0000 UTC m=+0.080576583 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 06 08:54:51 np0005548790.localdomain podman[90622]: 2025-12-06 08:54:51.610166393 +0000 UTC m=+0.124010478 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:54:51 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:54:51 np0005548790.localdomain podman[90623]: 2025-12-06 08:54:51.614588082 +0000 UTC m=+0.125635522 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4)
Dec 06 08:54:51 np0005548790.localdomain podman[90623]: 2025-12-06 08:54:51.698255158 +0000 UTC m=+0.209302668 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 06 08:54:51 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:54:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:54:54 np0005548790.localdomain podman[90670]: 2025-12-06 08:54:54.563675666 +0000 UTC m=+0.081579591 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5)
Dec 06 08:54:54 np0005548790.localdomain podman[90670]: 2025-12-06 08:54:54.594174744 +0000 UTC m=+0.112078679 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:54:54 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:55:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:55:01 np0005548790.localdomain podman[90697]: 2025-12-06 08:55:01.564856414 +0000 UTC m=+0.079891756 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:55:01 np0005548790.localdomain podman[90697]: 2025-12-06 08:55:01.763287619 +0000 UTC m=+0.278322981 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=metrics_qdr, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:55:01 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:55:02 np0005548790.localdomain sshd[90726]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:55:02 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:55:02 np0005548790.localdomain recover_tripleo_nova_virtqemud[90729]: 62556
Dec 06 08:55:02 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:55:02 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:55:03 np0005548790.localdomain sshd[90726]: Received disconnect from 35.247.75.98 port 57360:11: Bye Bye [preauth]
Dec 06 08:55:03 np0005548790.localdomain sshd[90726]: Disconnected from authenticating user root 35.247.75.98 port 57360 [preauth]
Dec 06 08:55:09 np0005548790.localdomain sudo[90730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:55:09 np0005548790.localdomain sudo[90730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:55:09 np0005548790.localdomain sudo[90730]: pam_unix(sudo:session): session closed for user root
Dec 06 08:55:10 np0005548790.localdomain sudo[90745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:55:10 np0005548790.localdomain sudo[90745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:55:10 np0005548790.localdomain sudo[90745]: pam_unix(sudo:session): session closed for user root
Dec 06 08:55:11 np0005548790.localdomain sudo[90792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:55:11 np0005548790.localdomain sudo[90792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:55:11 np0005548790.localdomain sudo[90792]: pam_unix(sudo:session): session closed for user root
Dec 06 08:55:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:55:15 np0005548790.localdomain podman[90807]: 2025-12-06 08:55:15.575582232 +0000 UTC m=+0.085432123 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public)
Dec 06 08:55:15 np0005548790.localdomain podman[90807]: 2025-12-06 08:55:15.581707876 +0000 UTC m=+0.091553997 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:55:15 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:55:17 np0005548790.localdomain podman[90828]: 2025-12-06 08:55:17.554853289 +0000 UTC m=+0.069862936 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:55:17 np0005548790.localdomain podman[90828]: 2025-12-06 08:55:17.58621864 +0000 UTC m=+0.101228287 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1)
Dec 06 08:55:17 np0005548790.localdomain podman[90827]: 2025-12-06 08:55:17.614033307 +0000 UTC m=+0.126180547 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, 
konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 08:55:17 np0005548790.localdomain podman[90838]: 2025-12-06 08:55:17.634204698 +0000 UTC m=+0.136186076 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z)
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:55:17 np0005548790.localdomain podman[90841]: 2025-12-06 08:55:17.677954503 +0000 UTC m=+0.177367322 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, url=https://www.redhat.com, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 08:55:17 np0005548790.localdomain podman[90827]: 2025-12-06 08:55:17.699019727 +0000 UTC m=+0.211166977 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container)
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:55:17 np0005548790.localdomain podman[90841]: 2025-12-06 08:55:17.712847438 +0000 UTC m=+0.212260247 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:55:17 np0005548790.localdomain podman[90829]: 2025-12-06 08:55:17.777271547 +0000 UTC m=+0.282861592 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.buildah.version=1.41.4)
Dec 06 08:55:17 np0005548790.localdomain podman[90829]: 2025-12-06 08:55:17.807024456 +0000 UTC m=+0.312614491 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 06 08:55:17 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:55:17 np0005548790.localdomain podman[90838]: 2025-12-06 08:55:17.995218527 +0000 UTC m=+0.497199875 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git)
Dec 06 08:55:18 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:55:18 np0005548790.localdomain systemd[1]: tmp-crun.Ekm2p1.mount: Deactivated successfully.
Dec 06 08:55:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:55:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:55:22 np0005548790.localdomain podman[90939]: 2025-12-06 08:55:22.569052262 +0000 UTC m=+0.084970092 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1761123044, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Dec 06 08:55:22 np0005548790.localdomain podman[90939]: 2025-12-06 08:55:22.611195183 +0000 UTC m=+0.127113073 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, container_name=ovn_metadata_agent, io.buildah.version=1.41.4)
Dec 06 08:55:22 np0005548790.localdomain podman[90940]: 2025-12-06 08:55:22.619631749 +0000 UTC m=+0.128375566 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-type=git, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container)
Dec 06 08:55:22 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:55:22 np0005548790.localdomain podman[90940]: 2025-12-06 08:55:22.674404259 +0000 UTC m=+0.183148136 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:55:22 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:55:23 np0005548790.localdomain sshd[90988]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:55:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:55:25 np0005548790.localdomain podman[90990]: 2025-12-06 08:55:25.564490969 +0000 UTC m=+0.078325323 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 06 08:55:25 np0005548790.localdomain sshd[90988]: Received disconnect from 103.226.138.52 port 54396:11: Bye Bye [preauth]
Dec 06 08:55:25 np0005548790.localdomain sshd[90988]: Disconnected from authenticating user root 103.226.138.52 port 54396 [preauth]
Dec 06 08:55:25 np0005548790.localdomain podman[90990]: 2025-12-06 08:55:25.590086275 +0000 UTC m=+0.103920569 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=)
Dec 06 08:55:25 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:55:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:55:32 np0005548790.localdomain podman[91016]: 2025-12-06 08:55:32.562617704 +0000 UTC m=+0.081231661 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 08:55:32 np0005548790.localdomain podman[91016]: 2025-12-06 08:55:32.77783724 +0000 UTC m=+0.296451167 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, version=17.1.12, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:55:32 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:55:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:55:46 np0005548790.localdomain podman[91069]: 2025-12-06 08:55:46.569883411 +0000 UTC m=+0.085793604 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.buildah.version=1.41.4, container_name=collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1)
Dec 06 08:55:46 np0005548790.localdomain podman[91069]: 2025-12-06 08:55:46.583081375 +0000 UTC m=+0.098991538 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, container_name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:51:28Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3)
Dec 06 08:55:46 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:55:48 np0005548790.localdomain podman[91091]: 2025-12-06 08:55:48.590088316 +0000 UTC m=+0.093724486 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:55:48 np0005548790.localdomain podman[91095]: 2025-12-06 08:55:48.632012501 +0000 UTC m=+0.134974163 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com)
Dec 06 08:55:48 np0005548790.localdomain podman[91089]: 2025-12-06 08:55:48.685938278 +0000 UTC m=+0.199270429 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, tcib_managed=true, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 06 08:55:48 np0005548790.localdomain podman[91091]: 2025-12-06 08:55:48.69904378 +0000 UTC m=+0.202679990 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi)
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:55:48 np0005548790.localdomain podman[91089]: 2025-12-06 08:55:48.721436901 +0000 UTC m=+0.234769082 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 06 08:55:48 np0005548790.localdomain podman[91090]: 2025-12-06 08:55:48.728695666 +0000 UTC m=+0.237239688 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:55:48 np0005548790.localdomain podman[91090]: 2025-12-06 08:55:48.752589037 +0000 UTC m=+0.261133009 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, release=1761123044, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team)
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:55:48 np0005548790.localdomain podman[91098]: 2025-12-06 08:55:48.836454897 +0000 UTC m=+0.336879161 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc.)
Dec 06 08:55:48 np0005548790.localdomain podman[91098]: 2025-12-06 08:55:48.869152365 +0000 UTC m=+0.369576589 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-type=git)
Dec 06 08:55:48 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:55:48 np0005548790.localdomain podman[91095]: 2025-12-06 08:55:48.990191623 +0000 UTC m=+0.493153275 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:55:49 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:55:49 np0005548790.localdomain systemd[1]: tmp-crun.0YdKyr.mount: Deactivated successfully.
Dec 06 08:55:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:55:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:55:53 np0005548790.localdomain podman[91203]: 2025-12-06 08:55:53.537302542 +0000 UTC m=+0.054767840 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, container_name=ovn_metadata_agent, release=1761123044, tcib_managed=true, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 06 08:55:53 np0005548790.localdomain systemd[1]: tmp-crun.ODTdm9.mount: Deactivated successfully.
Dec 06 08:55:53 np0005548790.localdomain podman[91204]: 2025-12-06 08:55:53.571278603 +0000 UTC m=+0.081568859 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 08:55:53 np0005548790.localdomain podman[91203]: 2025-12-06 08:55:53.620290019 +0000 UTC m=+0.137755337 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 08:55:53 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:55:53 np0005548790.localdomain podman[91204]: 2025-12-06 08:55:53.670947178 +0000 UTC m=+0.181237474 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4)
Dec 06 08:55:53 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:55:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:55:56 np0005548790.localdomain podman[91252]: 2025-12-06 08:55:56.572707883 +0000 UTC m=+0.085728292 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, container_name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 06 08:55:56 np0005548790.localdomain podman[91252]: 2025-12-06 08:55:56.632316262 +0000 UTC m=+0.145336691 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step5, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:55:56 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:56:00 np0005548790.localdomain sshd[91279]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:56:02 np0005548790.localdomain sshd[91279]: Received disconnect from 43.163.123.45 port 34688:11: Bye Bye [preauth]
Dec 06 08:56:02 np0005548790.localdomain sshd[91279]: Disconnected from authenticating user root 43.163.123.45 port 34688 [preauth]
Dec 06 08:56:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:56:03 np0005548790.localdomain podman[91281]: 2025-12-06 08:56:03.569656135 +0000 UTC m=+0.083919413 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.buildah.version=1.41.4)
Dec 06 08:56:03 np0005548790.localdomain podman[91281]: 2025-12-06 08:56:03.760203729 +0000 UTC m=+0.274467027 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd)
Dec 06 08:56:03 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:56:11 np0005548790.localdomain sudo[91310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:56:11 np0005548790.localdomain sudo[91310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:56:11 np0005548790.localdomain sudo[91310]: pam_unix(sudo:session): session closed for user root
Dec 06 08:56:11 np0005548790.localdomain sudo[91325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:56:11 np0005548790.localdomain sudo[91325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:56:12 np0005548790.localdomain sudo[91325]: pam_unix(sudo:session): session closed for user root
Dec 06 08:56:12 np0005548790.localdomain sudo[91371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:56:12 np0005548790.localdomain sudo[91371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:56:12 np0005548790.localdomain sudo[91371]: pam_unix(sudo:session): session closed for user root
Dec 06 08:56:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:56:17 np0005548790.localdomain podman[91386]: 2025-12-06 08:56:17.572102102 +0000 UTC m=+0.086524243 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:51:28Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:56:17 np0005548790.localdomain podman[91386]: 2025-12-06 08:56:17.608251472 +0000 UTC m=+0.122673593 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:56:17 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:56:19 np0005548790.localdomain recover_tripleo_nova_virtqemud[91433]: 62556
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:56:19 np0005548790.localdomain podman[91419]: 2025-12-06 08:56:19.590800586 +0000 UTC m=+0.090786677 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, 
tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com)
Dec 06 08:56:19 np0005548790.localdomain podman[91419]: 2025-12-06 08:56:19.59875846 +0000 UTC m=+0.098744541 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:56:19 np0005548790.localdomain podman[91408]: 2025-12-06 08:56:19.642272478 +0000 UTC m=+0.148230289 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:56:19 np0005548790.localdomain podman[91406]: 2025-12-06 08:56:19.694495739 +0000 UTC m=+0.205646419 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-iscsid, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1)
Dec 06 08:56:19 np0005548790.localdomain podman[91406]: 2025-12-06 08:56:19.706194854 +0000 UTC m=+0.217345534 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:56:19 np0005548790.localdomain podman[91409]: 2025-12-06 08:56:19.749425033 +0000 UTC m=+0.245495809 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, container_name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:56:19 np0005548790.localdomain podman[91407]: 2025-12-06 08:56:19.810910473 +0000 UTC m=+0.320313336 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:56:19 np0005548790.localdomain podman[91408]: 2025-12-06 08:56:19.824967881 +0000 UTC m=+0.330925612 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:56:19 np0005548790.localdomain podman[91407]: 2025-12-06 08:56:19.843256332 +0000 UTC m=+0.352659205 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 06 08:56:19 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:56:20 np0005548790.localdomain podman[91409]: 2025-12-06 08:56:20.139875261 +0000 UTC m=+0.635946057 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, 
url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12)
Dec 06 08:56:20 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:56:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:56:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:56:24 np0005548790.localdomain systemd[1]: tmp-crun.BppXU6.mount: Deactivated successfully.
Dec 06 08:56:24 np0005548790.localdomain podman[91516]: 2025-12-06 08:56:24.581032727 +0000 UTC m=+0.095400581 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git)
Dec 06 08:56:24 np0005548790.localdomain podman[91516]: 2025-12-06 08:56:24.625168842 +0000 UTC m=+0.139536746 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:56:24 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:56:24 np0005548790.localdomain podman[91517]: 2025-12-06 08:56:24.665752441 +0000 UTC m=+0.176568540 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-type=git, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 06 08:56:24 np0005548790.localdomain podman[91517]: 2025-12-06 08:56:24.692282422 +0000 UTC m=+0.203098551 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, release=1761123044, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 06 08:56:24 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:56:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:56:27 np0005548790.localdomain systemd[1]: tmp-crun.ICy6qp.mount: Deactivated successfully.
Dec 06 08:56:27 np0005548790.localdomain podman[91564]: 2025-12-06 08:56:27.593286875 +0000 UTC m=+0.081615521 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:56:27 np0005548790.localdomain podman[91564]: 2025-12-06 08:56:27.647253013 +0000 UTC m=+0.135581619 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 06 08:56:27 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:56:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:56:34 np0005548790.localdomain podman[91590]: 2025-12-06 08:56:34.550249614 +0000 UTC m=+0.071559901 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, version=17.1.12, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, container_name=metrics_qdr, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 06 08:56:34 np0005548790.localdomain podman[91590]: 2025-12-06 08:56:34.725201149 +0000 UTC m=+0.246511436 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:56:34 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:56:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:56:48 np0005548790.localdomain podman[91619]: 2025-12-06 08:56:48.559532335 +0000 UTC m=+0.074543871 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 06 08:56:48 np0005548790.localdomain podman[91619]: 2025-12-06 08:56:48.597234777 +0000 UTC m=+0.112246293 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, name=rhosp17/openstack-collectd, distribution-scope=public, release=1761123044, url=https://www.redhat.com)
Dec 06 08:56:48 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:56:50 np0005548790.localdomain podman[91641]: 2025-12-06 08:56:50.595909454 +0000 UTC m=+0.104847644 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:56:50 np0005548790.localdomain podman[91647]: 2025-12-06 08:56:50.64085173 +0000 UTC m=+0.144289232 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public)
Dec 06 08:56:50 np0005548790.localdomain podman[91653]: 2025-12-06 08:56:50.655065112 +0000 UTC m=+0.155649888 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, container_name=logrotate_crond, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:56:50 np0005548790.localdomain podman[91639]: 2025-12-06 08:56:50.575721203 +0000 UTC m=+0.090853169 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:56:50 np0005548790.localdomain podman[91653]: 2025-12-06 08:56:50.683381611 +0000 UTC m=+0.183966427 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond)
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:56:50 np0005548790.localdomain podman[91639]: 2025-12-06 08:56:50.708151586 +0000 UTC m=+0.223283542 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z)
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:56:50 np0005548790.localdomain podman[91641]: 2025-12-06 08:56:50.734903505 +0000 UTC m=+0.243841745 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:56:50 np0005548790.localdomain podman[91640]: 2025-12-06 08:56:50.688636833 +0000 UTC m=+0.198372974 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, container_name=ceilometer_agent_compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Dec 06 08:56:50 np0005548790.localdomain podman[91640]: 2025-12-06 08:56:50.823206084 +0000 UTC m=+0.332942265 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4)
Dec 06 08:56:50 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:56:50 np0005548790.localdomain sshd[91745]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:56:51 np0005548790.localdomain podman[91647]: 2025-12-06 08:56:51.004120529 +0000 UTC m=+0.507558061 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:56:51 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:56:51 np0005548790.localdomain systemd[1]: tmp-crun.A5iwvf.mount: Deactivated successfully.
Dec 06 08:56:52 np0005548790.localdomain sshd[91745]: Received disconnect from 35.247.75.98 port 37128:11: Bye Bye [preauth]
Dec 06 08:56:52 np0005548790.localdomain sshd[91745]: Disconnected from authenticating user root 35.247.75.98 port 37128 [preauth]
Dec 06 08:56:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:56:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:56:55 np0005548790.localdomain systemd[1]: tmp-crun.sIeQNE.mount: Deactivated successfully.
Dec 06 08:56:55 np0005548790.localdomain podman[91747]: 2025-12-06 08:56:55.576830714 +0000 UTC m=+0.087012526 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent)
Dec 06 08:56:55 np0005548790.localdomain podman[91748]: 2025-12-06 08:56:55.63031183 +0000 UTC m=+0.137649805 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 06 08:56:55 np0005548790.localdomain podman[91748]: 2025-12-06 08:56:55.6563676 +0000 UTC m=+0.163705565 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12)
Dec 06 08:56:55 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:56:55 np0005548790.localdomain podman[91747]: 2025-12-06 08:56:55.706468724 +0000 UTC m=+0.216650526 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team)
Dec 06 08:56:55 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:56:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:56:58 np0005548790.localdomain podman[91795]: 2025-12-06 08:56:58.561880104 +0000 UTC m=+0.081114438 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 06 08:56:58 np0005548790.localdomain podman[91795]: 2025-12-06 08:56:58.587316736 +0000 UTC m=+0.106551070 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:56:58 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:57:01 np0005548790.localdomain sshd[91822]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:04 np0005548790.localdomain sshd[91822]: Received disconnect from 103.226.138.52 port 41922:11: Bye Bye [preauth]
Dec 06 08:57:04 np0005548790.localdomain sshd[91822]: Disconnected from authenticating user root 103.226.138.52 port 41922 [preauth]
Dec 06 08:57:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:57:05 np0005548790.localdomain systemd[1]: tmp-crun.dNnoUz.mount: Deactivated successfully.
Dec 06 08:57:05 np0005548790.localdomain podman[91825]: 2025-12-06 08:57:05.567440228 +0000 UTC m=+0.082161506 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git)
Dec 06 08:57:05 np0005548790.localdomain podman[91825]: 2025-12-06 08:57:05.770265641 +0000 UTC m=+0.284986889 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 06 08:57:05 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:57:12 np0005548790.localdomain sudo[91854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:57:12 np0005548790.localdomain sudo[91854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:57:12 np0005548790.localdomain sudo[91854]: pam_unix(sudo:session): session closed for user root
Dec 06 08:57:13 np0005548790.localdomain sudo[91869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:57:13 np0005548790.localdomain sudo[91869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:57:13 np0005548790.localdomain sudo[91869]: pam_unix(sudo:session): session closed for user root
Dec 06 08:57:14 np0005548790.localdomain sshd[91917]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:14 np0005548790.localdomain sudo[91919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:57:14 np0005548790.localdomain sudo[91919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:57:14 np0005548790.localdomain sudo[91919]: pam_unix(sudo:session): session closed for user root
Dec 06 08:57:16 np0005548790.localdomain sshd[91917]: Received disconnect from 43.163.123.45 port 33354:11: Bye Bye [preauth]
Dec 06 08:57:16 np0005548790.localdomain sshd[91917]: Disconnected from authenticating user root 43.163.123.45 port 33354 [preauth]
Dec 06 08:57:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:57:19 np0005548790.localdomain podman[91934]: 2025-12-06 08:57:19.586606533 +0000 UTC m=+0.093088249 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-type=git, container_name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:57:19 np0005548790.localdomain podman[91934]: 2025-12-06 08:57:19.601138333 +0000 UTC m=+0.107620099 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 06 08:57:19 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:57:21 np0005548790.localdomain podman[91954]: 2025-12-06 08:57:21.595089414 +0000 UTC m=+0.098650818 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
vcs-type=git, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:57:21 np0005548790.localdomain podman[91954]: 2025-12-06 08:57:21.6277151 +0000 UTC m=+0.131276454 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, version=17.1.12, config_id=tripleo_step3, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64)
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: tmp-crun.ZkYaYu.mount: Deactivated successfully.
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:57:21 np0005548790.localdomain podman[91955]: 2025-12-06 08:57:21.653155823 +0000 UTC m=+0.154886869 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public)
Dec 06 08:57:21 np0005548790.localdomain podman[91955]: 2025-12-06 08:57:21.690114004 +0000 UTC m=+0.191845020 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:57:21 np0005548790.localdomain podman[91957]: 2025-12-06 08:57:21.740911437 +0000 UTC m=+0.234176035 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:57:21 np0005548790.localdomain podman[91956]: 2025-12-06 08:57:21.694505862 +0000 UTC m=+0.192992951 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:57:21 np0005548790.localdomain podman[91956]: 2025-12-06 08:57:21.774507369 +0000 UTC m=+0.272994528 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:57:21 np0005548790.localdomain podman[91965]: 2025-12-06 08:57:21.856702755 +0000 UTC m=+0.348402791 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, container_name=logrotate_crond, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 08:57:21 np0005548790.localdomain podman[91965]: 2025-12-06 08:57:21.868518581 +0000 UTC m=+0.360218597 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Dec 06 08:57:21 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:57:22 np0005548790.localdomain podman[91957]: 2025-12-06 08:57:22.112397227 +0000 UTC m=+0.605661785 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:57:22 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:57:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:57:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:57:26 np0005548790.localdomain podman[92067]: 2025-12-06 08:57:26.550754587 +0000 UTC m=+0.072317932 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 06 08:57:26 np0005548790.localdomain systemd[1]: tmp-crun.AG0lA1.mount: Deactivated successfully.
Dec 06 08:57:26 np0005548790.localdomain podman[92068]: 2025-12-06 08:57:26.612022361 +0000 UTC m=+0.127321008 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:57:26 np0005548790.localdomain podman[92067]: 2025-12-06 08:57:26.624182397 +0000 UTC m=+0.145745712 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:57:26 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:57:26 np0005548790.localdomain podman[92068]: 2025-12-06 08:57:26.657280246 +0000 UTC m=+0.172578883 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 06 08:57:26 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:57:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:57:29 np0005548790.localdomain podman[92115]: 2025-12-06 08:57:29.566994813 +0000 UTC m=+0.082509986 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 08:57:29 np0005548790.localdomain podman[92115]: 2025-12-06 08:57:29.617218361 +0000 UTC m=+0.132733554 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, vcs-type=git, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 06 08:57:29 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:57:31 np0005548790.localdomain sshd[92141]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:32 np0005548790.localdomain sshd[92141]: Invalid user admin from 114.111.54.188 port 49042
Dec 06 08:57:32 np0005548790.localdomain sshd[92141]: Connection closed by invalid user admin 114.111.54.188 port 49042 [preauth]
Dec 06 08:57:32 np0005548790.localdomain sshd[92143]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:33 np0005548790.localdomain sshd[92143]: Invalid user orangepi from 114.111.54.188 port 33124
Dec 06 08:57:33 np0005548790.localdomain sshd[92143]: Connection closed by invalid user orangepi 114.111.54.188 port 33124 [preauth]
Dec 06 08:57:33 np0005548790.localdomain sshd[92145]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:34 np0005548790.localdomain sshd[92145]: Connection closed by authenticating user root 114.111.54.188 port 33138 [preauth]
Dec 06 08:57:35 np0005548790.localdomain sshd[92147]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:36 np0005548790.localdomain sshd[92147]: Connection closed by authenticating user root 114.111.54.188 port 33140 [preauth]
Dec 06 08:57:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:57:36 np0005548790.localdomain systemd[1]: tmp-crun.cq5Ujg.mount: Deactivated successfully.
Dec 06 08:57:36 np0005548790.localdomain podman[92149]: 2025-12-06 08:57:36.4372765 +0000 UTC m=+0.092616208 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, 
build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:57:36 np0005548790.localdomain podman[92149]: 2025-12-06 08:57:36.661275633 +0000 UTC m=+0.316615271 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.openshift.expose-services=)
Dec 06 08:57:36 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:57:45 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:57:45 np0005548790.localdomain recover_tripleo_nova_virtqemud[92178]: 62556
Dec 06 08:57:45 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:57:45 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:57:49 np0005548790.localdomain sshd[92179]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:57:50 np0005548790.localdomain podman[92181]: 2025-12-06 08:57:50.571924303 +0000 UTC m=+0.087381120 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64)
Dec 06 08:57:50 np0005548790.localdomain podman[92181]: 2025-12-06 08:57:50.580900901 +0000 UTC m=+0.096357768 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:57:50 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:57:51 np0005548790.localdomain sshd[92179]: Invalid user admin from 45.135.232.92 port 58792
Dec 06 08:57:51 np0005548790.localdomain sshd[92179]: Connection reset by invalid user admin 45.135.232.92 port 58792 [preauth]
Dec 06 08:57:51 np0005548790.localdomain sshd[92201]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:57:52 np0005548790.localdomain podman[92207]: 2025-12-06 08:57:52.567354496 +0000 UTC m=+0.075685869 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Dec 06 08:57:52 np0005548790.localdomain podman[92207]: 2025-12-06 08:57:52.580592917 +0000 UTC m=+0.088924230 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-cron, container_name=logrotate_crond, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-cron-container, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: tmp-crun.Hvvqld.mount: Deactivated successfully.
Dec 06 08:57:52 np0005548790.localdomain podman[92204]: 2025-12-06 08:57:52.625337945 +0000 UTC m=+0.135359763 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.expose-services=)
Dec 06 08:57:52 np0005548790.localdomain podman[92206]: 2025-12-06 08:57:52.674487908 +0000 UTC m=+0.182088472 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, io.openshift.expose-services=)
Dec 06 08:57:52 np0005548790.localdomain podman[92204]: 2025-12-06 08:57:52.678165255 +0000 UTC m=+0.188187083 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:57:52 np0005548790.localdomain podman[92205]: 2025-12-06 08:57:52.718584178 +0000 UTC m=+0.230968939 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., maintainer=OpenStack 
TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:57:52 np0005548790.localdomain podman[92205]: 2025-12-06 08:57:52.743382206 +0000 UTC m=+0.255767017 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:57:52 np0005548790.localdomain podman[92203]: 2025-12-06 08:57:52.770141486 +0000 UTC m=+0.283383030 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=iscsid, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 06 08:57:52 np0005548790.localdomain podman[92203]: 2025-12-06 08:57:52.781973511 +0000 UTC m=+0.295215055 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, container_name=iscsid, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:57:52 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:57:53 np0005548790.localdomain podman[92206]: 2025-12-06 08:57:53.036941475 +0000 UTC m=+0.544542019 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, architecture=x86_64, build-date=2025-11-19T00:36:58Z)
Dec 06 08:57:53 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:57:53 np0005548790.localdomain sshd[92201]: Invalid user admin from 45.135.232.92 port 58816
Dec 06 08:57:53 np0005548790.localdomain sshd[92201]: Connection reset by invalid user admin 45.135.232.92 port 58816 [preauth]
Dec 06 08:57:53 np0005548790.localdomain sshd[92319]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:56 np0005548790.localdomain sshd[92319]: Connection reset by authenticating user root 45.135.232.92 port 58818 [preauth]
Dec 06 08:57:56 np0005548790.localdomain sshd[92321]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:57:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:57:57 np0005548790.localdomain podman[92323]: 2025-12-06 08:57:57.570979152 +0000 UTC m=+0.085094598 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 08:57:57 np0005548790.localdomain systemd[1]: tmp-crun.BsQxQM.mount: Deactivated successfully.
Dec 06 08:57:57 np0005548790.localdomain podman[92324]: 2025-12-06 08:57:57.62065308 +0000 UTC m=+0.132063344 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:57:57 np0005548790.localdomain podman[92323]: 2025-12-06 08:57:57.640555118 +0000 UTC m=+0.154670574 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:57:57 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:57:57 np0005548790.localdomain podman[92324]: 2025-12-06 08:57:57.672381622 +0000 UTC m=+0.183791896 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, 
name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Dec 06 08:57:57 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:57:58 np0005548790.localdomain sshd[92321]: Connection reset by authenticating user root 45.135.232.92 port 62716 [preauth]
Dec 06 08:57:58 np0005548790.localdomain sshd[92372]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:57:59 np0005548790.localdomain sshd[92372]: Invalid user admin from 45.135.232.92 port 62738
Dec 06 08:57:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:57:59 np0005548790.localdomain systemd[1]: tmp-crun.FAEwAK.mount: Deactivated successfully.
Dec 06 08:57:59 np0005548790.localdomain podman[92374]: 2025-12-06 08:57:59.9012761 +0000 UTC m=+0.081825912 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 08:57:59 np0005548790.localdomain podman[92374]: 2025-12-06 08:57:59.93031372 +0000 UTC m=+0.110863522 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:57:59 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:58:00 np0005548790.localdomain sshd[92372]: Connection reset by invalid user admin 45.135.232.92 port 62738 [preauth]
Dec 06 08:58:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:58:07 np0005548790.localdomain podman[92400]: 2025-12-06 08:58:07.555475442 +0000 UTC m=+0.077360143 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, version=17.1.12, 
io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 06 08:58:07 np0005548790.localdomain podman[92400]: 2025-12-06 08:58:07.705671637 +0000 UTC m=+0.227556338 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:58:07 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:58:14 np0005548790.localdomain sudo[92430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:58:14 np0005548790.localdomain sudo[92430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:58:14 np0005548790.localdomain sudo[92430]: pam_unix(sudo:session): session closed for user root
Dec 06 08:58:14 np0005548790.localdomain sudo[92445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:58:14 np0005548790.localdomain sudo[92445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:58:15 np0005548790.localdomain sudo[92445]: pam_unix(sudo:session): session closed for user root
Dec 06 08:58:16 np0005548790.localdomain sudo[92492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:58:16 np0005548790.localdomain sudo[92492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:58:16 np0005548790.localdomain sudo[92492]: pam_unix(sudo:session): session closed for user root
Dec 06 08:58:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:58:21 np0005548790.localdomain podman[92507]: 2025-12-06 08:58:21.577851804 +0000 UTC m=+0.091733766 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 08:58:21 np0005548790.localdomain podman[92507]: 2025-12-06 08:58:21.596278192 +0000 UTC m=+0.110160144 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:58:21 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: tmp-crun.AjGMyj.mount: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain podman[92527]: 2025-12-06 08:58:23.590737798 +0000 UTC m=+0.097302522 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.expose-services=)
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: tmp-crun.J4htn5.mount: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain podman[92527]: 2025-12-06 08:58:23.628000298 +0000 UTC m=+0.134564952 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z)
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain podman[92534]: 2025-12-06 08:58:23.639331638 +0000 UTC m=+0.132694072 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container)
Dec 06 08:58:23 np0005548790.localdomain podman[92540]: 2025-12-06 08:58:23.689871249 +0000 UTC m=+0.179675759 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 08:58:23 np0005548790.localdomain podman[92529]: 2025-12-06 08:58:23.611319555 +0000 UTC m=+0.107862183 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 06 08:58:23 np0005548790.localdomain podman[92540]: 2025-12-06 08:58:23.726156742 +0000 UTC m=+0.215961232 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 06 08:58:23 np0005548790.localdomain podman[92528]: 2025-12-06 08:58:23.737519403 +0000 UTC m=+0.238749196 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain podman[92529]: 2025-12-06 08:58:23.741117159 +0000 UTC m=+0.237659847 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:58:23 np0005548790.localdomain podman[92528]: 2025-12-06 08:58:23.763006889 +0000 UTC m=+0.264236682 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1)
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:58:23 np0005548790.localdomain podman[92534]: 2025-12-06 08:58:23.962132573 +0000 UTC m=+0.455495057 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_migration_target, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:58:23 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:58:27 np0005548790.localdomain sshd[92640]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:58:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:58:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:58:28 np0005548790.localdomain podman[92643]: 2025-12-06 08:58:28.57342917 +0000 UTC m=+0.084236326 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git)
Dec 06 08:58:28 np0005548790.localdomain podman[92642]: 2025-12-06 08:58:28.620571931 +0000 UTC m=+0.133503054 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 06 08:58:28 np0005548790.localdomain podman[92643]: 2025-12-06 08:58:28.624390052 +0000 UTC m=+0.135197278 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T23:34:05Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:58:28 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:58:28 np0005548790.localdomain podman[92642]: 2025-12-06 08:58:28.654276555 +0000 UTC m=+0.167207668 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044)
Dec 06 08:58:28 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:58:29 np0005548790.localdomain sshd[92640]: Received disconnect from 43.163.123.45 port 60250:11: Bye Bye [preauth]
Dec 06 08:58:29 np0005548790.localdomain sshd[92640]: Disconnected from authenticating user root 43.163.123.45 port 60250 [preauth]
Dec 06 08:58:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:58:30 np0005548790.localdomain podman[92690]: 2025-12-06 08:58:30.566315344 +0000 UTC m=+0.082711775 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container)
Dec 06 08:58:30 np0005548790.localdomain podman[92690]: 2025-12-06 08:58:30.592705985 +0000 UTC m=+0.109102406 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 08:58:30 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:58:35 np0005548790.localdomain sshd[92716]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:58:37 np0005548790.localdomain sshd[92716]: Received disconnect from 103.226.138.52 port 47054:11: Bye Bye [preauth]
Dec 06 08:58:37 np0005548790.localdomain sshd[92716]: Disconnected from authenticating user root 103.226.138.52 port 47054 [preauth]
Dec 06 08:58:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:58:38 np0005548790.localdomain podman[92718]: 2025-12-06 08:58:38.562876328 +0000 UTC m=+0.081758060 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, name=rhosp17/openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr)
Dec 06 08:58:38 np0005548790.localdomain podman[92718]: 2025-12-06 08:58:38.790167468 +0000 UTC m=+0.309049170 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 08:58:38 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:58:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:58:52 np0005548790.localdomain podman[92747]: 2025-12-06 08:58:52.552077979 +0000 UTC m=+0.070059760 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:58:52 np0005548790.localdomain podman[92747]: 2025-12-06 08:58:52.586873662 +0000 UTC m=+0.104855453 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-collectd, container_name=collectd, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, tcib_managed=true)
Dec 06 08:58:52 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:58:53 np0005548790.localdomain sshd[92767]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: tmp-crun.f3b995.mount: Deactivated successfully.
Dec 06 08:58:54 np0005548790.localdomain podman[92769]: 2025-12-06 08:58:54.587166134 +0000 UTC m=+0.094888319 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z)
Dec 06 08:58:54 np0005548790.localdomain podman[92771]: 2025-12-06 08:58:54.64806707 +0000 UTC m=+0.149084917 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true)
Dec 06 08:58:54 np0005548790.localdomain podman[92778]: 2025-12-06 08:58:54.604978726 +0000 UTC m=+0.100438856 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=logrotate_crond, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible)
Dec 06 08:58:54 np0005548790.localdomain podman[92769]: 2025-12-06 08:58:54.674949663 +0000 UTC m=+0.182671898 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Dec 06 08:58:54 np0005548790.localdomain podman[92770]: 2025-12-06 08:58:54.685177575 +0000 UTC m=+0.190150957 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute)
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:58:54 np0005548790.localdomain podman[92778]: 2025-12-06 08:58:54.688134943 +0000 UTC m=+0.183595033 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:58:54 np0005548790.localdomain podman[92771]: 2025-12-06 08:58:54.703411388 +0000 UTC m=+0.204429195 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:58:54 np0005548790.localdomain podman[92770]: 2025-12-06 08:58:54.74718174 +0000 UTC m=+0.252155142 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:58:54 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:58:54 np0005548790.localdomain podman[92772]: 2025-12-06 08:58:54.787719456 +0000 UTC m=+0.286685398 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 06 08:58:55 np0005548790.localdomain podman[92772]: 2025-12-06 08:58:55.138096262 +0000 UTC m=+0.637062214 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=)
Dec 06 08:58:55 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:58:56 np0005548790.localdomain sshd[92767]: Received disconnect from 35.247.75.98 port 59714:11: Bye Bye [preauth]
Dec 06 08:58:56 np0005548790.localdomain sshd[92767]: Disconnected from authenticating user root 35.247.75.98 port 59714 [preauth]
Dec 06 08:58:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:58:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:58:59 np0005548790.localdomain podman[92886]: 2025-12-06 08:58:59.568272722 +0000 UTC m=+0.082689394 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:58:59 np0005548790.localdomain podman[92887]: 2025-12-06 08:58:59.617498689 +0000 UTC m=+0.128325176 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 06 08:58:59 np0005548790.localdomain podman[92886]: 2025-12-06 08:58:59.637479159 +0000 UTC m=+0.151895801 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 08:58:59 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:58:59 np0005548790.localdomain podman[92887]: 2025-12-06 08:58:59.692681293 +0000 UTC m=+0.203507770 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, 
io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller)
Dec 06 08:58:59 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Deactivated successfully.
Dec 06 08:59:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:59:01 np0005548790.localdomain podman[92933]: 2025-12-06 08:59:01.574890183 +0000 UTC m=+0.086543648 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step5, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 08:59:01 np0005548790.localdomain podman[92933]: 2025-12-06 08:59:01.609276016 +0000 UTC m=+0.120929521 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:59:01 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:59:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:59:09 np0005548790.localdomain systemd[1]: tmp-crun.SCYXtu.mount: Deactivated successfully.
Dec 06 08:59:09 np0005548790.localdomain podman[92959]: 2025-12-06 08:59:09.568866229 +0000 UTC m=+0.086767103 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=metrics_qdr)
Dec 06 08:59:09 np0005548790.localdomain podman[92959]: 2025-12-06 08:59:09.764587262 +0000 UTC m=+0.282488136 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-type=git, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., tcib_managed=true)
Dec 06 08:59:09 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:59:15 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 08:59:15 np0005548790.localdomain recover_tripleo_nova_virtqemud[92989]: 62556
Dec 06 08:59:15 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 08:59:15 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 08:59:16 np0005548790.localdomain sudo[92990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:59:16 np0005548790.localdomain sudo[92990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:59:16 np0005548790.localdomain sudo[92990]: pam_unix(sudo:session): session closed for user root
Dec 06 08:59:16 np0005548790.localdomain sudo[93005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 08:59:16 np0005548790.localdomain sudo[93005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:59:17 np0005548790.localdomain sudo[93005]: pam_unix(sudo:session): session closed for user root
Dec 06 08:59:17 np0005548790.localdomain sudo[93051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 08:59:17 np0005548790.localdomain sudo[93051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:59:17 np0005548790.localdomain sudo[93051]: pam_unix(sudo:session): session closed for user root
Dec 06 08:59:17 np0005548790.localdomain sudo[93066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 08:59:17 np0005548790.localdomain sudo[93066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:59:17 np0005548790.localdomain sudo[93066]: pam_unix(sudo:session): session closed for user root
Dec 06 08:59:21 np0005548790.localdomain sudo[93100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 08:59:21 np0005548790.localdomain sudo[93100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 08:59:21 np0005548790.localdomain sudo[93100]: pam_unix(sudo:session): session closed for user root
Dec 06 08:59:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:59:23 np0005548790.localdomain podman[93115]: 2025-12-06 08:59:23.587601354 +0000 UTC m=+0.093622425 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 08:59:23 np0005548790.localdomain podman[93115]: 2025-12-06 08:59:23.598236036 +0000 UTC m=+0.104257057 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, distribution-scope=public)
Dec 06 08:59:23 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: tmp-crun.hNVxKC.mount: Deactivated successfully.
Dec 06 08:59:25 np0005548790.localdomain podman[93135]: 2025-12-06 08:59:25.581296871 +0000 UTC m=+0.092985628 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, build-date=2025-11-18T23:44:13Z, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Dec 06 08:59:25 np0005548790.localdomain podman[93149]: 2025-12-06 08:59:25.635013976 +0000 UTC m=+0.127927255 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, batch=17.1_20251118.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 08:59:25 np0005548790.localdomain podman[93136]: 2025-12-06 08:59:25.593869694 +0000 UTC m=+0.101967376 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:59:25 np0005548790.localdomain podman[93136]: 2025-12-06 08:59:25.672939983 +0000 UTC m=+0.181037665 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:59:25 np0005548790.localdomain podman[93137]: 2025-12-06 08:59:25.69018739 +0000 UTC m=+0.194044680 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 06 08:59:25 np0005548790.localdomain podman[93135]: 2025-12-06 08:59:25.71130876 +0000 UTC m=+0.222997557 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:59:25 np0005548790.localdomain podman[93137]: 2025-12-06 08:59:25.742425056 +0000 UTC m=+0.246282356 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4)
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:59:25 np0005548790.localdomain podman[93140]: 2025-12-06 08:59:25.788611101 +0000 UTC m=+0.290890699 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, 
container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step4, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 08:59:25 np0005548790.localdomain podman[93149]: 2025-12-06 08:59:25.819706106 +0000 UTC m=+0.312619415 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team)
Dec 06 08:59:25 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:59:26 np0005548790.localdomain podman[93140]: 2025-12-06 08:59:26.156678987 +0000 UTC m=+0.658958645 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 08:59:26 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 08:59:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 08:59:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 08:59:30 np0005548790.localdomain systemd[1]: tmp-crun.R6T4iP.mount: Deactivated successfully.
Dec 06 08:59:30 np0005548790.localdomain podman[93251]: 2025-12-06 08:59:30.58125466 +0000 UTC m=+0.095886616 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 08:59:30 np0005548790.localdomain podman[93251]: 2025-12-06 08:59:30.667288722 +0000 UTC m=+0.181920648 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 06 08:59:30 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 08:59:30 np0005548790.localdomain podman[93252]: 2025-12-06 08:59:30.670496387 +0000 UTC m=+0.182212865 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, 
name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z)
Dec 06 08:59:30 np0005548790.localdomain podman[93252]: 2025-12-06 08:59:30.754369183 +0000 UTC m=+0.266085601 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 08:59:30 np0005548790.localdomain podman[93252]: unhealthy
Dec 06 08:59:30 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 08:59:30 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 08:59:31 np0005548790.localdomain systemd[1]: tmp-crun.8ZaOqQ.mount: Deactivated successfully.
Dec 06 08:59:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 08:59:32 np0005548790.localdomain podman[93302]: 2025-12-06 08:59:32.557691068 +0000 UTC m=+0.076047698 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 06 08:59:32 np0005548790.localdomain podman[93302]: 2025-12-06 08:59:32.59014009 +0000 UTC m=+0.108496670 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true)
Dec 06 08:59:32 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 08:59:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 08:59:40 np0005548790.localdomain podman[93328]: 2025-12-06 08:59:40.563800626 +0000 UTC m=+0.079893760 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible)
Dec 06 08:59:40 np0005548790.localdomain podman[93328]: 2025-12-06 08:59:40.782700534 +0000 UTC m=+0.298793718 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 08:59:40 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 08:59:42 np0005548790.localdomain sshd[93357]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 08:59:43 np0005548790.localdomain sshd[93357]: Received disconnect from 43.163.123.45 port 58910:11: Bye Bye [preauth]
Dec 06 08:59:43 np0005548790.localdomain sshd[93357]: Disconnected from authenticating user root 43.163.123.45 port 58910 [preauth]
Dec 06 08:59:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 08:59:54 np0005548790.localdomain systemd[1]: tmp-crun.G1Su3C.mount: Deactivated successfully.
Dec 06 08:59:54 np0005548790.localdomain podman[93359]: 2025-12-06 08:59:54.591926519 +0000 UTC m=+0.102712447 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 08:59:54 np0005548790.localdomain podman[93359]: 2025-12-06 08:59:54.605333884 +0000 UTC m=+0.116119812 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 08:59:54 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: tmp-crun.PCcmPq.mount: Deactivated successfully.
Dec 06 08:59:56 np0005548790.localdomain podman[93380]: 2025-12-06 08:59:56.57982179 +0000 UTC m=+0.092400652 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, container_name=iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc.)
Dec 06 08:59:56 np0005548790.localdomain podman[93399]: 2025-12-06 08:59:56.602638886 +0000 UTC m=+0.097050476 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-cron, architecture=x86_64, release=1761123044)
Dec 06 08:59:56 np0005548790.localdomain podman[93380]: 2025-12-06 08:59:56.622993966 +0000 UTC m=+0.135572808 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, container_name=iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 08:59:56 np0005548790.localdomain podman[93382]: 2025-12-06 08:59:56.627850804 +0000 UTC m=+0.133884622 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 08:59:56 np0005548790.localdomain podman[93399]: 2025-12-06 08:59:56.685214757 +0000 UTC m=+0.179626337 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044)
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 08:59:56 np0005548790.localdomain podman[93388]: 2025-12-06 08:59:56.737235107 +0000 UTC m=+0.238043577 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, 
batch=17.1_20251118.1, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 06 08:59:56 np0005548790.localdomain podman[93382]: 2025-12-06 08:59:56.757232858 +0000 UTC m=+0.263266686 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Dec 06 08:59:56 np0005548790.localdomain podman[93381]: 2025-12-06 08:59:56.785922019 +0000 UTC m=+0.292192373 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 08:59:56 np0005548790.localdomain podman[93381]: 2025-12-06 08:59:56.821343398 +0000 UTC m=+0.327613792 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, 
container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 08:59:56 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 08:59:57 np0005548790.localdomain podman[93388]: 2025-12-06 08:59:57.115219685 +0000 UTC m=+0.616028175 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Dec 06 08:59:57 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:00:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:00:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:00:01 np0005548790.localdomain podman[93492]: 2025-12-06 09:00:01.558904116 +0000 UTC m=+0.075501334 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 06 09:00:01 np0005548790.localdomain podman[93492]: 2025-12-06 09:00:01.613271078 +0000 UTC m=+0.129868276 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=)
Dec 06 09:00:01 np0005548790.localdomain podman[93493]: 2025-12-06 09:00:01.614158092 +0000 UTC m=+0.125882521 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible)
Dec 06 09:00:01 np0005548790.localdomain podman[93493]: 2025-12-06 09:00:01.662653479 +0000 UTC m=+0.174377958 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 06 09:00:01 np0005548790.localdomain podman[93493]: unhealthy
Dec 06 09:00:01 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:00:01 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:00:01 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Deactivated successfully.
Dec 06 09:00:01 np0005548790.localdomain CROND[93542]: (root) CMD (sleep `expr ${RANDOM} % 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond)
Dec 06 09:00:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:00:03 np0005548790.localdomain systemd[1]: tmp-crun.GGhZ05.mount: Deactivated successfully.
Dec 06 09:00:03 np0005548790.localdomain podman[93545]: 2025-12-06 09:00:03.552070058 +0000 UTC m=+0.071421715 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git)
Dec 06 09:00:03 np0005548790.localdomain podman[93545]: 2025-12-06 09:00:03.58115674 +0000 UTC m=+0.100508377 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 09:00:03 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:00:05 np0005548790.localdomain sshd[93572]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:00:07 np0005548790.localdomain sshd[93572]: Received disconnect from 103.226.138.52 port 33128:11: Bye Bye [preauth]
Dec 06 09:00:07 np0005548790.localdomain sshd[93572]: Disconnected from authenticating user root 103.226.138.52 port 33128 [preauth]
Dec 06 09:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:00:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:00:11 np0005548790.localdomain podman[93574]: 2025-12-06 09:00:11.567680328 +0000 UTC m=+0.082604203 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:00:11 np0005548790.localdomain podman[93574]: 2025-12-06 09:00:11.761214392 +0000 UTC m=+0.276138277 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 06 09:00:11 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:00:18 np0005548790.localdomain CROND[93541]: (root) CMDEND (sleep `expr ${RANDOM} % 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond)
Dec 06 09:00:21 np0005548790.localdomain sudo[93605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:00:21 np0005548790.localdomain sudo[93605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:00:21 np0005548790.localdomain sudo[93605]: pam_unix(sudo:session): session closed for user root
Dec 06 09:00:21 np0005548790.localdomain sudo[93620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:00:21 np0005548790.localdomain sudo[93620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:00:21 np0005548790.localdomain sudo[93620]: pam_unix(sudo:session): session closed for user root
Dec 06 09:00:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:00:25 np0005548790.localdomain podman[93668]: 2025-12-06 09:00:25.552724189 +0000 UTC m=+0.071341174 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=collectd, summary=Red Hat OpenStack 
Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 06 09:00:25 np0005548790.localdomain podman[93668]: 2025-12-06 09:00:25.563873665 +0000 UTC m=+0.082490680 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, 
name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:00:25 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:00:26 np0005548790.localdomain sudo[93688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:00:26 np0005548790.localdomain sudo[93688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:00:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:00:26 np0005548790.localdomain sudo[93688]: pam_unix(sudo:session): session closed for user root
Dec 06 09:00:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:00:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:00:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:00:26 np0005548790.localdomain podman[93703]: 2025-12-06 09:00:26.883877717 +0000 UTC m=+0.087555934 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 09:00:26 np0005548790.localdomain podman[93703]: 2025-12-06 09:00:26.897065296 +0000 UTC m=+0.100743493 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:00:26 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:00:26 np0005548790.localdomain systemd[1]: tmp-crun.ayaFuM.mount: Deactivated successfully.
Dec 06 09:00:26 np0005548790.localdomain podman[93733]: 2025-12-06 09:00:26.982095803 +0000 UTC m=+0.089626469 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 09:00:27 np0005548790.localdomain podman[93734]: 2025-12-06 09:00:27.022697569 +0000 UTC m=+0.129983349 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com)
Dec 06 09:00:27 np0005548790.localdomain podman[93704]: 2025-12-06 09:00:26.984841295 +0000 UTC m=+0.185035220 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 06 09:00:27 np0005548790.localdomain podman[93704]: 2025-12-06 09:00:27.068133616 +0000 UTC m=+0.268327481 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:32Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Dec 06 09:00:27 np0005548790.localdomain podman[93733]: 2025-12-06 09:00:27.077879984 +0000 UTC m=+0.185410690 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z)
Dec 06 09:00:27 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:00:27 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:00:27 np0005548790.localdomain podman[93734]: 2025-12-06 09:00:27.118517272 +0000 UTC m=+0.225803002 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 09:00:27 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:00:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:00:27 np0005548790.localdomain podman[93793]: 2025-12-06 09:00:27.218769592 +0000 UTC m=+0.064757139 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, distribution-scope=public)
Dec 06 09:00:27 np0005548790.localdomain podman[93793]: 2025-12-06 09:00:27.522562232 +0000 UTC m=+0.368549769 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:00:27 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:00:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:00:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:00:32 np0005548790.localdomain podman[93817]: 2025-12-06 09:00:32.567536916 +0000 UTC m=+0.079330577 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public)
Dec 06 09:00:32 np0005548790.localdomain podman[93817]: 2025-12-06 09:00:32.586044816 +0000 UTC m=+0.097838487 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:00:32 np0005548790.localdomain podman[93817]: unhealthy
Dec 06 09:00:32 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:00:32 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:00:32 np0005548790.localdomain podman[93818]: 2025-12-06 09:00:32.675587602 +0000 UTC m=+0.184833335 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:00:32 np0005548790.localdomain podman[93818]: 2025-12-06 09:00:32.719249741 +0000 UTC m=+0.228495514 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 06 09:00:32 np0005548790.localdomain podman[93818]: unhealthy
Dec 06 09:00:32 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:00:32 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:00:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:00:34 np0005548790.localdomain podman[93856]: 2025-12-06 09:00:34.552424218 +0000 UTC m=+0.068931029 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute)
Dec 06 09:00:34 np0005548790.localdomain podman[93856]: 2025-12-06 09:00:34.578385106 +0000 UTC m=+0.094891937 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, container_name=nova_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12)
Dec 06 09:00:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:00:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:00:42 np0005548790.localdomain podman[93881]: 2025-12-06 09:00:42.538088945 +0000 UTC m=+0.057683833 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true)
Dec 06 09:00:42 np0005548790.localdomain podman[93881]: 2025-12-06 09:00:42.748190639 +0000 UTC m=+0.267785547 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, io.openshift.expose-services=, version=17.1.12)
Dec 06 09:00:42 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:00:43 np0005548790.localdomain sshd[93910]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:00:46 np0005548790.localdomain sshd[93910]: Received disconnect from 35.247.75.98 port 46624:11: Bye Bye [preauth]
Dec 06 09:00:46 np0005548790.localdomain sshd[93910]: Disconnected from authenticating user root 35.247.75.98 port 46624 [preauth]
Dec 06 09:00:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:00:56 np0005548790.localdomain podman[93912]: 2025-12-06 09:00:56.546962146 +0000 UTC m=+0.065097068 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, distribution-scope=public, architecture=x86_64, vcs-type=git, version=17.1.12, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 06 09:00:56 np0005548790.localdomain podman[93912]: 2025-12-06 09:00:56.559081608 +0000 UTC m=+0.077216530 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, config_id=tripleo_step3)
Dec 06 09:00:56 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:00:57 np0005548790.localdomain podman[93933]: 2025-12-06 09:00:57.576100692 +0000 UTC m=+0.087939295 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_id=tripleo_step4, release=1761123044, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:00:57 np0005548790.localdomain podman[93932]: 2025-12-06 09:00:57.622884043 +0000 UTC m=+0.140528310 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, container_name=iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:00:57 np0005548790.localdomain podman[93933]: 2025-12-06 09:00:57.628769349 +0000 UTC m=+0.140607952 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
url=https://www.redhat.com, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64)
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: tmp-crun.eUim6E.mount: Deactivated successfully.
Dec 06 09:00:57 np0005548790.localdomain podman[93940]: 2025-12-06 09:00:57.668217746 +0000 UTC m=+0.175785185 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Dec 06 09:00:57 np0005548790.localdomain podman[93932]: 2025-12-06 09:00:57.678863049 +0000 UTC m=+0.196507326 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 09:00:57 np0005548790.localdomain podman[93934]: 2025-12-06 09:00:57.689245683 +0000 UTC m=+0.198631471 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, architecture=x86_64, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z)
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:00:57 np0005548790.localdomain podman[93940]: 2025-12-06 09:00:57.70418496 +0000 UTC m=+0.211752409 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, architecture=x86_64, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:00:57 np0005548790.localdomain podman[93934]: 2025-12-06 09:00:57.721264084 +0000 UTC m=+0.230649862 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ceilometer_agent_ipmi, version=17.1.12, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4)
Dec 06 09:00:57 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:00:57 np0005548790.localdomain podman[93987]: 2025-12-06 09:00:57.795051031 +0000 UTC m=+0.191136132 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_migration_target, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:00:58 np0005548790.localdomain podman[93987]: 2025-12-06 09:00:58.190905403 +0000 UTC m=+0.586990494 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 09:00:58 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:01:00 np0005548790.localdomain sshd[94046]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:01:01 np0005548790.localdomain CROND[94049]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 09:01:01 np0005548790.localdomain run-parts[94052]: (/etc/cron.hourly) starting 0anacron
Dec 06 09:01:01 np0005548790.localdomain run-parts[94058]: (/etc/cron.hourly) finished 0anacron
Dec 06 09:01:01 np0005548790.localdomain CROND[94048]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 09:01:01 np0005548790.localdomain CROND[94060]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 09:01:01 np0005548790.localdomain run-parts[94063]: (/etc/cron.hourly) starting 0anacron
Dec 06 09:01:02 np0005548790.localdomain anacron[94071]: Anacron started on 2025-12-06
Dec 06 09:01:02 np0005548790.localdomain anacron[94071]: Will run job `cron.daily' in 8 min.
Dec 06 09:01:02 np0005548790.localdomain anacron[94071]: Will run job `cron.weekly' in 28 min.
Dec 06 09:01:02 np0005548790.localdomain anacron[94071]: Will run job `cron.monthly' in 48 min.
Dec 06 09:01:02 np0005548790.localdomain anacron[94071]: Jobs will be executed sequentially
Dec 06 09:01:02 np0005548790.localdomain run-parts[94073]: (/etc/cron.hourly) finished 0anacron
Dec 06 09:01:02 np0005548790.localdomain CROND[94059]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 09:01:02 np0005548790.localdomain sshd[94046]: Received disconnect from 43.163.123.45 port 57572:11: Bye Bye [preauth]
Dec 06 09:01:02 np0005548790.localdomain sshd[94046]: Disconnected from authenticating user root 43.163.123.45 port 57572 [preauth]
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:01:03 np0005548790.localdomain podman[94074]: 2025-12-06 09:01:03.559174254 +0000 UTC m=+0.074328263 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: tmp-crun.TYq6M1.mount: Deactivated successfully.
Dec 06 09:01:03 np0005548790.localdomain podman[94075]: 2025-12-06 09:01:03.620330776 +0000 UTC m=+0.133709818 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, container_name=ovn_controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, 
architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z)
Dec 06 09:01:03 np0005548790.localdomain podman[94075]: 2025-12-06 09:01:03.640154163 +0000 UTC m=+0.153533235 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 06 09:01:03 np0005548790.localdomain podman[94075]: unhealthy
Dec 06 09:01:03 np0005548790.localdomain podman[94074]: 2025-12-06 09:01:03.64946865 +0000 UTC m=+0.164622649 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:01:03 np0005548790.localdomain podman[94074]: unhealthy
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:01:03 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:01:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:01:05 np0005548790.localdomain podman[94115]: 2025-12-06 09:01:05.557889314 +0000 UTC m=+0.076142862 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5)
Dec 06 09:01:05 np0005548790.localdomain podman[94115]: 2025-12-06 09:01:05.591190697 +0000 UTC m=+0.109444305 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:01:05 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:01:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:01:13 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:01:13 np0005548790.localdomain recover_tripleo_nova_virtqemud[94144]: 62556
Dec 06 09:01:13 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:01:13 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:01:13 np0005548790.localdomain podman[94142]: 2025-12-06 09:01:13.567588556 +0000 UTC m=+0.082404967 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1)
Dec 06 09:01:13 np0005548790.localdomain podman[94142]: 2025-12-06 09:01:13.758129802 +0000 UTC m=+0.272946153 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:01:13 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:01:26 np0005548790.localdomain sudo[94171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:01:26 np0005548790.localdomain sudo[94171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:01:26 np0005548790.localdomain sudo[94171]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:26 np0005548790.localdomain sudo[94196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 09:01:26 np0005548790.localdomain sudo[94196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:26 np0005548790.localdomain podman[94185]: 2025-12-06 09:01:26.963392554 +0000 UTC m=+0.073545933 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=)
Dec 06 09:01:26 np0005548790.localdomain podman[94185]: 2025-12-06 09:01:26.974030416 +0000 UTC m=+0.084183745 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:01:26 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:01:27 np0005548790.localdomain sudo[94196]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:27 np0005548790.localdomain sudo[94238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:01:27 np0005548790.localdomain sudo[94238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:27 np0005548790.localdomain sudo[94238]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:27 np0005548790.localdomain sudo[94253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:01:27 np0005548790.localdomain sudo[94253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:28 np0005548790.localdomain sudo[94253]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:28 np0005548790.localdomain sudo[94299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:01:28 np0005548790.localdomain sudo[94299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:01:28 np0005548790.localdomain sudo[94299]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:01:28 np0005548790.localdomain sudo[94355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 09:01:28 np0005548790.localdomain sudo[94355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: tmp-crun.eqtiIs.mount: Deactivated successfully.
Dec 06 09:01:28 np0005548790.localdomain podman[94315]: 2025-12-06 09:01:28.558021692 +0000 UTC m=+0.101443212 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:01:28 np0005548790.localdomain podman[94313]: 2025-12-06 09:01:28.593180935 +0000 UTC m=+0.138656880 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp17/openstack-iscsid, release=1761123044, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 06 09:01:28 np0005548790.localdomain podman[94323]: 2025-12-06 09:01:28.609280853 +0000 UTC m=+0.141670871 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git)
Dec 06 09:01:28 np0005548790.localdomain podman[94313]: 2025-12-06 09:01:28.624872836 +0000 UTC m=+0.170348721 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, version=17.1.12, distribution-scope=public, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:01:28 np0005548790.localdomain podman[94315]: 2025-12-06 09:01:28.638473817 +0000 UTC m=+0.181895397 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64)
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:01:28 np0005548790.localdomain podman[94323]: 2025-12-06 09:01:28.667501667 +0000 UTC m=+0.199891705 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, release=1761123044, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:01:28 np0005548790.localdomain podman[94317]: 2025-12-06 09:01:28.709864181 +0000 UTC m=+0.246503271 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, managed_by=tripleo_ansible, architecture=x86_64)
Dec 06 09:01:28 np0005548790.localdomain podman[94316]: 2025-12-06 09:01:28.642520185 +0000 UTC m=+0.181995830 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 06 09:01:28 np0005548790.localdomain podman[94316]: 2025-12-06 09:01:28.775187944 +0000 UTC m=+0.314663589 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:01:28 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:01:29 np0005548790.localdomain podman[94317]: 2025-12-06 09:01:29.098105182 +0000 UTC m=+0.634744312 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vcs-type=git)
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 2025-12-06 09:01:29.119315825 +0000 UTC m=+0.067171213 container create fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_lehmann, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, architecture=x86_64, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: Started libpod-conmon-fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a.scope.
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 2025-12-06 09:01:29.175637569 +0000 UTC m=+0.123492987 container init fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_lehmann, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 2025-12-06 09:01:29.082926329 +0000 UTC m=+0.030781787 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 2025-12-06 09:01:29.185731346 +0000 UTC m=+0.133586764 container start fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_lehmann, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=1763362218, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main)
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 2025-12-06 09:01:29.186012085 +0000 UTC m=+0.133867503 container attach fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_lehmann, architecture=x86_64, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Dec 06 09:01:29 np0005548790.localdomain hardcore_lehmann[94490]: 167 167
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: libpod-fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a.scope: Deactivated successfully.
Dec 06 09:01:29 np0005548790.localdomain podman[94473]: 2025-12-06 09:01:29.188975913 +0000 UTC m=+0.136831361 container died fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_lehmann, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=1763362218, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 09:01:29 np0005548790.localdomain podman[94495]: 2025-12-06 09:01:29.286471799 +0000 UTC m=+0.084897733 container remove fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_lehmann, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: libpod-conmon-fd7d9803bb54abb3b320eba91322173440d77a7de06183560611715d6804e18a.scope: Deactivated successfully.
Dec 06 09:01:29 np0005548790.localdomain podman[94517]: 
Dec 06 09:01:29 np0005548790.localdomain podman[94517]: 2025-12-06 09:01:29.504840943 +0000 UTC m=+0.072292968 container create c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_lamarr, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main)
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: Started libpod-conmon-c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f.scope.
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:01:29 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00717220acd01a2af189578ba1d90f3254725b66cb0443d0ff1f7317b10b9d7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:01:29 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00717220acd01a2af189578ba1d90f3254725b66cb0443d0ff1f7317b10b9d7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:01:29 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00717220acd01a2af189578ba1d90f3254725b66cb0443d0ff1f7317b10b9d7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:01:29 np0005548790.localdomain podman[94517]: 2025-12-06 09:01:29.475964867 +0000 UTC m=+0.043416942 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 09:01:29 np0005548790.localdomain podman[94517]: 2025-12-06 09:01:29.581297632 +0000 UTC m=+0.148749657 container init c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_lamarr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=)
Dec 06 09:01:29 np0005548790.localdomain systemd[1]: tmp-crun.vI0OMi.mount: Deactivated successfully.
Dec 06 09:01:29 np0005548790.localdomain podman[94517]: 2025-12-06 09:01:29.592855599 +0000 UTC m=+0.160307624 container start c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_lamarr, name=rhceph, release=1763362218, vcs-type=git, version=7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph)
Dec 06 09:01:29 np0005548790.localdomain podman[94517]: 2025-12-06 09:01:29.593109695 +0000 UTC m=+0.160561750 container attach c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_lamarr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]: [
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:     {
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "available": false,
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "ceph_device": false,
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "lsm_data": {},
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "lvs": [],
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "path": "/dev/sr0",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "rejected_reasons": [
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "Has a FileSystem",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "Insufficient space (<5GB)"
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         ],
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         "sys_api": {
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "actuators": null,
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "device_nodes": "sr0",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "human_readable_size": "482.00 KB",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "id_bus": "ata",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "model": "QEMU DVD-ROM",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "nr_requests": "2",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "partitions": {},
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "path": "/dev/sr0",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "removable": "1",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "rev": "2.5+",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "ro": "0",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "rotational": "1",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "sas_address": "",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "sas_device_handle": "",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "scheduler_mode": "mq-deadline",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "sectors": 0,
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "sectorsize": "2048",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "size": 493568.0,
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "support_discard": "0",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "type": "disk",
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:             "vendor": "QEMU"
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:         }
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]:     }
Dec 06 09:01:30 np0005548790.localdomain friendly_lamarr[94533]: ]
Dec 06 09:01:30 np0005548790.localdomain systemd[1]: libpod-c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f.scope: Deactivated successfully.
Dec 06 09:01:30 np0005548790.localdomain systemd[1]: libpod-c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f.scope: Consumed 1.070s CPU time.
Dec 06 09:01:30 np0005548790.localdomain podman[94517]: 2025-12-06 09:01:30.615389889 +0000 UTC m=+1.182841934 container died c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_lamarr, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7)
Dec 06 09:01:30 np0005548790.localdomain systemd[1]: tmp-crun.Wz3C9H.mount: Deactivated successfully.
Dec 06 09:01:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-00717220acd01a2af189578ba1d90f3254725b66cb0443d0ff1f7317b10b9d7f-merged.mount: Deactivated successfully.
Dec 06 09:01:30 np0005548790.localdomain podman[96407]: 2025-12-06 09:01:30.695423651 +0000 UTC m=+0.073277435 container remove c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_lamarr, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, RELEASE=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 09:01:30 np0005548790.localdomain systemd[1]: libpod-conmon-c11bc7540ea5b767c3bc64a10f3bdb7e6634a06a87f7c2e14aaa4dfcf546cd8f.scope: Deactivated successfully.
Dec 06 09:01:30 np0005548790.localdomain sudo[94355]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:31 np0005548790.localdomain sudo[96420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:01:31 np0005548790.localdomain sudo[96420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:01:31 np0005548790.localdomain sudo[96420]: pam_unix(sudo:session): session closed for user root
Dec 06 09:01:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:01:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:01:34 np0005548790.localdomain podman[96435]: 2025-12-06 09:01:34.584509057 +0000 UTC m=+0.094378715 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 06 09:01:34 np0005548790.localdomain podman[96436]: 2025-12-06 09:01:34.628630727 +0000 UTC m=+0.138639599 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, distribution-scope=public, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com)
Dec 06 09:01:34 np0005548790.localdomain podman[96436]: 2025-12-06 09:01:34.644335864 +0000 UTC m=+0.154344766 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true)
Dec 06 09:01:34 np0005548790.localdomain podman[96436]: unhealthy
Dec 06 09:01:34 np0005548790.localdomain podman[96435]: 2025-12-06 09:01:34.652210113 +0000 UTC m=+0.162079811 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:01:34 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:01:34 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:01:34 np0005548790.localdomain podman[96435]: unhealthy
Dec 06 09:01:34 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:01:34 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:01:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:01:36 np0005548790.localdomain systemd[1]: tmp-crun.WLjH8L.mount: Deactivated successfully.
Dec 06 09:01:36 np0005548790.localdomain podman[96474]: 2025-12-06 09:01:36.584376068 +0000 UTC m=+0.092009172 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, container_name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z)
Dec 06 09:01:36 np0005548790.localdomain podman[96474]: 2025-12-06 09:01:36.620460415 +0000 UTC m=+0.128093529 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:01:36 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:01:41 np0005548790.localdomain sshd[96500]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:01:43 np0005548790.localdomain sshd[96500]: Received disconnect from 103.226.138.52 port 44486:11: Bye Bye [preauth]
Dec 06 09:01:43 np0005548790.localdomain sshd[96500]: Disconnected from authenticating user root 103.226.138.52 port 44486 [preauth]
Dec 06 09:01:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:01:44 np0005548790.localdomain systemd[1]: tmp-crun.dBBUd8.mount: Deactivated successfully.
Dec 06 09:01:44 np0005548790.localdomain podman[96502]: 2025-12-06 09:01:44.575463669 +0000 UTC m=+0.086722842 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z)
Dec 06 09:01:44 np0005548790.localdomain podman[96502]: 2025-12-06 09:01:44.798873907 +0000 UTC m=+0.310133110 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, release=1761123044, container_name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=)
Dec 06 09:01:44 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:01:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:01:57 np0005548790.localdomain podman[96531]: 2025-12-06 09:01:57.5749181 +0000 UTC m=+0.090378718 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:01:57 np0005548790.localdomain podman[96531]: 2025-12-06 09:01:57.609980571 +0000 UTC m=+0.125441179 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team)
Dec 06 09:01:57 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:01:59 np0005548790.localdomain podman[96552]: 2025-12-06 09:01:59.577404511 +0000 UTC m=+0.081340169 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git)
Dec 06 09:01:59 np0005548790.localdomain podman[96554]: 2025-12-06 09:01:59.633836618 +0000 UTC m=+0.129423385 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible)
Dec 06 09:01:59 np0005548790.localdomain podman[96554]: 2025-12-06 09:01:59.658074032 +0000 UTC m=+0.153660759 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:01:59 np0005548790.localdomain podman[96552]: 2025-12-06 09:01:59.667193163 +0000 UTC m=+0.171128891 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:01:59 np0005548790.localdomain podman[96553]: 2025-12-06 09:01:59.750231647 +0000 UTC m=+0.250769135 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64)
Dec 06 09:01:59 np0005548790.localdomain podman[96561]: 2025-12-06 09:01:59.801589439 +0000 UTC m=+0.289684857 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com)
Dec 06 09:01:59 np0005548790.localdomain podman[96561]: 2025-12-06 09:01:59.807441204 +0000 UTC m=+0.295536662 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible)
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:01:59 np0005548790.localdomain podman[96555]: 2025-12-06 09:01:59.873544178 +0000 UTC m=+0.366082834 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, vcs-type=git, distribution-scope=public, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:01:59 np0005548790.localdomain podman[96553]: 2025-12-06 09:01:59.877684158 +0000 UTC m=+0.378221656 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:01:59 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:02:00 np0005548790.localdomain podman[96555]: 2025-12-06 09:02:00.205228678 +0000 UTC m=+0.697767374 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, 
name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 09:02:00 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: tmp-crun.fgb5PT.mount: Deactivated successfully.
Dec 06 09:02:05 np0005548790.localdomain podman[96671]: 2025-12-06 09:02:05.568392984 +0000 UTC m=+0.075157066 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public)
Dec 06 09:02:05 np0005548790.localdomain podman[96671]: 2025-12-06 09:02:05.583589046 +0000 UTC m=+0.090353098 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 06 09:02:05 np0005548790.localdomain podman[96671]: unhealthy
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:02:05 np0005548790.localdomain podman[96670]: 2025-12-06 09:02:05.673906672 +0000 UTC m=+0.182453782 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12)
Dec 06 09:02:05 np0005548790.localdomain podman[96670]: 2025-12-06 09:02:05.713414592 +0000 UTC m=+0.221961711 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044)
Dec 06 09:02:05 np0005548790.localdomain podman[96670]: unhealthy
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:02:05 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:02:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:02:07 np0005548790.localdomain podman[96710]: 2025-12-06 09:02:07.561186526 +0000 UTC m=+0.080150687 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true)
Dec 06 09:02:07 np0005548790.localdomain podman[96710]: 2025-12-06 09:02:07.615534629 +0000 UTC m=+0.134498760 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044)
Dec 06 09:02:07 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:02:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:02:15 np0005548790.localdomain podman[96736]: 2025-12-06 09:02:15.563334468 +0000 UTC m=+0.082001858 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=metrics_qdr, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:02:15 np0005548790.localdomain podman[96736]: 2025-12-06 09:02:15.759083892 +0000 UTC m=+0.277751212 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible)
Dec 06 09:02:15 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:02:25 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:02:25 np0005548790.localdomain recover_tripleo_nova_virtqemud[96766]: 62556
Dec 06 09:02:25 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:02:25 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:02:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:02:28 np0005548790.localdomain podman[96767]: 2025-12-06 09:02:28.565042488 +0000 UTC m=+0.083282531 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, release=1761123044)
Dec 06 09:02:28 np0005548790.localdomain podman[96767]: 2025-12-06 09:02:28.578106945 +0000 UTC m=+0.096347058 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git)
Dec 06 09:02:28 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:02:30 np0005548790.localdomain podman[96788]: 2025-12-06 09:02:30.578312795 +0000 UTC m=+0.085587243 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z)
Dec 06 09:02:30 np0005548790.localdomain podman[96788]: 2025-12-06 09:02:30.585916566 +0000 UTC m=+0.093191044 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-type=git)
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:02:30 np0005548790.localdomain podman[96789]: 2025-12-06 09:02:30.628256769 +0000 UTC m=+0.130866892 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, release=1761123044, container_name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 06 09:02:30 np0005548790.localdomain podman[96789]: 2025-12-06 09:02:30.675229885 +0000 UTC m=+0.177840008 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:02:30 np0005548790.localdomain podman[96796]: 2025-12-06 09:02:30.687534303 +0000 UTC m=+0.184038215 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:02:30 np0005548790.localdomain podman[96802]: 2025-12-06 09:02:30.734624791 +0000 UTC m=+0.226253673 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64)
Dec 06 09:02:30 np0005548790.localdomain podman[96802]: 2025-12-06 09:02:30.741371241 +0000 UTC m=+0.233000133 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:02:30 np0005548790.localdomain podman[96790]: 2025-12-06 09:02:30.785006368 +0000 UTC m=+0.284398866 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:02:30 np0005548790.localdomain podman[96790]: 2025-12-06 09:02:30.812133208 +0000 UTC m=+0.311525726 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 09:02:30 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:02:31 np0005548790.localdomain podman[96796]: 2025-12-06 09:02:31.057250711 +0000 UTC m=+0.553754673 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 06 09:02:31 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:02:31 np0005548790.localdomain sudo[96895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:02:31 np0005548790.localdomain sudo[96895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:02:31 np0005548790.localdomain sudo[96895]: pam_unix(sudo:session): session closed for user root
Dec 06 09:02:31 np0005548790.localdomain sudo[96910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:02:31 np0005548790.localdomain sudo[96910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:02:32 np0005548790.localdomain sudo[96910]: pam_unix(sudo:session): session closed for user root
Dec 06 09:02:33 np0005548790.localdomain sudo[96957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:02:33 np0005548790.localdomain sudo[96957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:02:33 np0005548790.localdomain sudo[96957]: pam_unix(sudo:session): session closed for user root
Dec 06 09:02:34 np0005548790.localdomain sshd[96972]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:02:35 np0005548790.localdomain sshd[96972]: Received disconnect from 35.247.75.98 port 33372:11: Bye Bye [preauth]
Dec 06 09:02:35 np0005548790.localdomain sshd[96972]: Disconnected from authenticating user root 35.247.75.98 port 33372 [preauth]
Dec 06 09:02:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:02:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:02:35 np0005548790.localdomain podman[96974]: 2025-12-06 09:02:35.811588384 +0000 UTC m=+0.108606363 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=)
Dec 06 09:02:35 np0005548790.localdomain podman[96974]: 2025-12-06 09:02:35.854268376 +0000 UTC m=+0.151286375 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:02:35 np0005548790.localdomain podman[96974]: unhealthy
Dec 06 09:02:35 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:02:35 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:02:35 np0005548790.localdomain podman[96992]: 2025-12-06 09:02:35.90835111 +0000 UTC m=+0.085802227 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:02:35 np0005548790.localdomain podman[96992]: 2025-12-06 09:02:35.954187207 +0000 UTC m=+0.131638284 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 06 09:02:35 np0005548790.localdomain podman[96992]: unhealthy
Dec 06 09:02:35 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:02:35 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:02:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:02:38 np0005548790.localdomain podman[97015]: 2025-12-06 09:02:38.576915522 +0000 UTC m=+0.085909591 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:02:38 np0005548790.localdomain podman[97015]: 2025-12-06 09:02:38.633205136 +0000 UTC m=+0.142199185 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:02:38 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:02:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:02:46 np0005548790.localdomain podman[97040]: 2025-12-06 09:02:46.559117756 +0000 UTC m=+0.077378124 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:46Z)
Dec 06 09:02:46 np0005548790.localdomain podman[97040]: 2025-12-06 09:02:46.751408598 +0000 UTC m=+0.269668936 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Dec 06 09:02:47 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:02:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:02:59 np0005548790.localdomain systemd[1]: tmp-crun.504qPt.mount: Deactivated successfully.
Dec 06 09:02:59 np0005548790.localdomain podman[97070]: 2025-12-06 09:02:59.574293245 +0000 UTC m=+0.090956574 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, release=1761123044, name=rhosp17/openstack-collectd)
Dec 06 09:02:59 np0005548790.localdomain podman[97070]: 2025-12-06 09:02:59.586253712 +0000 UTC m=+0.102917031 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:02:59 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:03:01 np0005548790.localdomain podman[97099]: 2025-12-06 09:03:01.566257736 +0000 UTC m=+0.072810163 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public)
Dec 06 09:03:01 np0005548790.localdomain podman[97099]: 2025-12-06 09:03:01.579352503 +0000 UTC m=+0.085904980 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, distribution-scope=public, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1)
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: tmp-crun.zkM7rQ.mount: Deactivated successfully.
Dec 06 09:03:01 np0005548790.localdomain podman[97090]: 2025-12-06 09:03:01.636286074 +0000 UTC m=+0.150277448 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:03:01 np0005548790.localdomain podman[97093]: 2025-12-06 09:03:01.676549933 +0000 UTC m=+0.182984257 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, 
container_name=nova_migration_target, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 06 09:03:01 np0005548790.localdomain podman[97092]: 2025-12-06 09:03:01.732450856 +0000 UTC m=+0.242625469 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 06 09:03:01 np0005548790.localdomain podman[97092]: 2025-12-06 09:03:01.780013127 +0000 UTC m=+0.290187670 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true)
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:03:01 np0005548790.localdomain podman[97091]: 2025-12-06 09:03:01.788854292 +0000 UTC m=+0.298985154 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:03:01 np0005548790.localdomain podman[97090]: 2025-12-06 09:03:01.805710469 +0000 UTC m=+0.319701793 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, 
config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com)
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:03:01 np0005548790.localdomain podman[97091]: 2025-12-06 09:03:01.863004829 +0000 UTC m=+0.373135751 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, distribution-scope=public)
Dec 06 09:03:01 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:03:02 np0005548790.localdomain podman[97093]: 2025-12-06 09:03:02.049100927 +0000 UTC m=+0.555535261 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:03:02 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: tmp-crun.SMd8J6.mount: Deactivated successfully.
Dec 06 09:03:06 np0005548790.localdomain podman[97204]: 2025-12-06 09:03:06.590296315 +0000 UTC m=+0.101053402 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, build-date=2025-11-19T00:14:25Z, distribution-scope=public, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Dec 06 09:03:06 np0005548790.localdomain podman[97204]: 2025-12-06 09:03:06.609115433 +0000 UTC m=+0.119872590 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com)
Dec 06 09:03:06 np0005548790.localdomain podman[97204]: unhealthy
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:03:06 np0005548790.localdomain podman[97205]: 2025-12-06 09:03:06.691551641 +0000 UTC m=+0.197043769 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller)
Dec 06 09:03:06 np0005548790.localdomain podman[97205]: 2025-12-06 09:03:06.709158818 +0000 UTC m=+0.214650986 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 06 09:03:06 np0005548790.localdomain podman[97205]: unhealthy
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:03:06 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:03:07 np0005548790.localdomain systemd[1]: tmp-crun.MJhRBR.mount: Deactivated successfully.
Dec 06 09:03:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:03:09 np0005548790.localdomain podman[97245]: 2025-12-06 09:03:09.565360259 +0000 UTC m=+0.080103177 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:03:09 np0005548790.localdomain podman[97245]: 2025-12-06 09:03:09.596085104 +0000 UTC m=+0.110828032 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, release=1761123044, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible)
Dec 06 09:03:09 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:03:12 np0005548790.localdomain sshd[97272]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:03:15 np0005548790.localdomain sshd[97272]: Received disconnect from 103.226.138.52 port 49832:11: Bye Bye [preauth]
Dec 06 09:03:15 np0005548790.localdomain sshd[97272]: Disconnected from authenticating user root 103.226.138.52 port 49832 [preauth]
Dec 06 09:03:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:03:18 np0005548790.localdomain podman[97274]: 2025-12-06 09:03:18.562649224 +0000 UTC m=+0.080137306 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:03:18 np0005548790.localdomain podman[97274]: 2025-12-06 09:03:18.77427826 +0000 UTC m=+0.291766302 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:03:18 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:03:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:03:30 np0005548790.localdomain podman[97304]: 2025-12-06 09:03:30.567748135 +0000 UTC m=+0.078334530 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:03:30 np0005548790.localdomain podman[97304]: 2025-12-06 09:03:30.607212342 +0000 UTC m=+0.117798727 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, container_name=collectd, io.k8s.description=Red 
Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:03:30 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: tmp-crun.f6InLL.mount: Deactivated successfully.
Dec 06 09:03:32 np0005548790.localdomain podman[97324]: 2025-12-06 09:03:32.583743402 +0000 UTC m=+0.089868406 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:03:32 np0005548790.localdomain podman[97324]: 2025-12-06 09:03:32.639418899 +0000 UTC m=+0.145543913 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:03:32 np0005548790.localdomain podman[97323]: 2025-12-06 09:03:32.684168866 +0000 UTC m=+0.193285999 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 09:03:32 np0005548790.localdomain podman[97325]: 2025-12-06 09:03:32.640870278 +0000 UTC m=+0.143786197 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:03:32 np0005548790.localdomain podman[97331]: 2025-12-06 09:03:32.741840607 +0000 UTC m=+0.242743652 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, container_name=logrotate_crond, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Dec 06 09:03:32 np0005548790.localdomain podman[97331]: 2025-12-06 09:03:32.773730642 +0000 UTC m=+0.274633737 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 
cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044)
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:03:32 np0005548790.localdomain podman[97322]: 2025-12-06 09:03:32.797218816 +0000 UTC m=+0.309955605 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack 
TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public)
Dec 06 09:03:32 np0005548790.localdomain podman[97322]: 2025-12-06 09:03:32.810069616 +0000 UTC m=+0.322806405 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=iscsid, version=17.1.12)
Dec 06 09:03:32 np0005548790.localdomain podman[97323]: 2025-12-06 09:03:32.820107083 +0000 UTC m=+0.329224256 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4)
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:03:32 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:03:33 np0005548790.localdomain podman[97325]: 2025-12-06 09:03:33.002308347 +0000 UTC m=+0.505224286 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp17/openstack-nova-compute)
Dec 06 09:03:33 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:03:33 np0005548790.localdomain sudo[97440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:03:33 np0005548790.localdomain sudo[97440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:03:33 np0005548790.localdomain sudo[97440]: pam_unix(sudo:session): session closed for user root
Dec 06 09:03:33 np0005548790.localdomain systemd[1]: tmp-crun.khMw5O.mount: Deactivated successfully.
Dec 06 09:03:33 np0005548790.localdomain sudo[97455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 09:03:33 np0005548790.localdomain sudo[97455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:03:34 np0005548790.localdomain systemd[1]: tmp-crun.ah94ni.mount: Deactivated successfully.
Dec 06 09:03:34 np0005548790.localdomain podman[97540]: 2025-12-06 09:03:34.42637525 +0000 UTC m=+0.100888327 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vcs-type=git)
Dec 06 09:03:34 np0005548790.localdomain podman[97540]: 2025-12-06 09:03:34.552421845 +0000 UTC m=+0.226934962 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 09:03:34 np0005548790.localdomain sudo[97455]: pam_unix(sudo:session): session closed for user root
Dec 06 09:03:34 np0005548790.localdomain sudo[97609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:03:34 np0005548790.localdomain sudo[97609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:03:34 np0005548790.localdomain sudo[97609]: pam_unix(sudo:session): session closed for user root
Dec 06 09:03:35 np0005548790.localdomain sudo[97624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:03:35 np0005548790.localdomain sudo[97624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:03:35 np0005548790.localdomain sudo[97624]: pam_unix(sudo:session): session closed for user root
Dec 06 09:03:36 np0005548790.localdomain sudo[97671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:03:36 np0005548790.localdomain sudo[97671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:03:36 np0005548790.localdomain sudo[97671]: pam_unix(sudo:session): session closed for user root
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:03:37 np0005548790.localdomain podman[97687]: 2025-12-06 09:03:37.569381501 +0000 UTC m=+0.079933352 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, release=1761123044, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: tmp-crun.pBV7h9.mount: Deactivated successfully.
Dec 06 09:03:37 np0005548790.localdomain podman[97686]: 2025-12-06 09:03:37.622573722 +0000 UTC m=+0.134404337 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 09:03:37 np0005548790.localdomain podman[97686]: 2025-12-06 09:03:37.635508105 +0000 UTC m=+0.147338730 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 06 09:03:37 np0005548790.localdomain podman[97686]: unhealthy
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:03:37 np0005548790.localdomain podman[97687]: 2025-12-06 09:03:37.691484651 +0000 UTC m=+0.202036472 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller)
Dec 06 09:03:37 np0005548790.localdomain podman[97687]: unhealthy
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:03:37 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:03:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:03:40 np0005548790.localdomain podman[97722]: 2025-12-06 09:03:40.571368569 +0000 UTC m=+0.086760843 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12)
Dec 06 09:03:40 np0005548790.localdomain podman[97722]: 2025-12-06 09:03:40.628984378 +0000 UTC m=+0.144376622 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Dec 06 09:03:40 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:03:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:03:49 np0005548790.localdomain podman[97748]: 2025-12-06 09:03:49.554842818 +0000 UTC m=+0.070851152 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Dec 06 09:03:49 np0005548790.localdomain podman[97748]: 2025-12-06 09:03:49.748250039 +0000 UTC m=+0.264258373 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:03:49 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:04:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:04:01 np0005548790.localdomain systemd[1]: tmp-crun.GH3zSh.mount: Deactivated successfully.
Dec 06 09:04:01 np0005548790.localdomain podman[97779]: 2025-12-06 09:04:01.574671768 +0000 UTC m=+0.091791557 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, container_name=collectd, managed_by=tripleo_ansible)
Dec 06 09:04:01 np0005548790.localdomain podman[97779]: 2025-12-06 09:04:01.609457851 +0000 UTC m=+0.126577650 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:04:01 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:04:03 np0005548790.localdomain podman[97799]: 2025-12-06 09:04:03.581133953 +0000 UTC m=+0.091597621 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, tcib_managed=true)
Dec 06 09:04:03 np0005548790.localdomain podman[97799]: 2025-12-06 09:04:03.594182859 +0000 UTC m=+0.104646497 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: tmp-crun.GIBtDZ.mount: Deactivated successfully.
Dec 06 09:04:03 np0005548790.localdomain podman[97802]: 2025-12-06 09:04:03.641571197 +0000 UTC m=+0.142815230 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:04:03 np0005548790.localdomain podman[97808]: 2025-12-06 09:04:03.686256042 +0000 UTC m=+0.182331768 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack 
Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:04:03 np0005548790.localdomain podman[97800]: 2025-12-06 09:04:03.744068357 +0000 UTC m=+0.250690253 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 09:04:03 np0005548790.localdomain podman[97801]: 2025-12-06 09:04:03.787302003 +0000 UTC m=+0.291142955 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:04:03 np0005548790.localdomain podman[97800]: 2025-12-06 09:04:03.79810253 +0000 UTC m=+0.304724356 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:04:03 np0005548790.localdomain podman[97801]: 2025-12-06 09:04:03.823057962 +0000 UTC m=+0.326898924 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, release=1761123044, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:04:03 np0005548790.localdomain podman[97808]: 2025-12-06 09:04:03.822949619 +0000 UTC m=+0.319025405 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:04:03 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:04:04 np0005548790.localdomain podman[97802]: 2025-12-06 09:04:04.028532693 +0000 UTC m=+0.529776716 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:04:04 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:04:08 np0005548790.localdomain podman[97913]: 2025-12-06 09:04:08.564998844 +0000 UTC m=+0.081704118 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 09:04:08 np0005548790.localdomain podman[97913]: 2025-12-06 09:04:08.578728239 +0000 UTC m=+0.095433503 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z)
Dec 06 09:04:08 np0005548790.localdomain podman[97913]: unhealthy
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: tmp-crun.0qnX9F.mount: Deactivated successfully.
Dec 06 09:04:08 np0005548790.localdomain podman[97914]: 2025-12-06 09:04:08.670013401 +0000 UTC m=+0.183748077 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:04:08 np0005548790.localdomain podman[97914]: 2025-12-06 09:04:08.687063763 +0000 UTC m=+0.200798439 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 06 09:04:08 np0005548790.localdomain podman[97914]: unhealthy
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:04:08 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:04:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:04:11 np0005548790.localdomain podman[97953]: 2025-12-06 09:04:11.539693407 +0000 UTC m=+0.061416960 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:04:11 np0005548790.localdomain podman[97953]: 2025-12-06 09:04:11.558414264 +0000 UTC m=+0.080137797 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:04:11 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:04:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:04:20 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:04:20 np0005548790.localdomain recover_tripleo_nova_virtqemud[97986]: 62556
Dec 06 09:04:20 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:04:20 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:04:20 np0005548790.localdomain systemd[1]: tmp-crun.GmENJ3.mount: Deactivated successfully.
Dec 06 09:04:20 np0005548790.localdomain podman[97979]: 2025-12-06 09:04:20.586978708 +0000 UTC m=+0.100649372 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:04:20 np0005548790.localdomain podman[97979]: 2025-12-06 09:04:20.789189763 +0000 UTC m=+0.302860387 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, url=https://www.redhat.com, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 06 09:04:20 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:04:28 np0005548790.localdomain sshd[98010]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:04:30 np0005548790.localdomain sshd[98010]: Received disconnect from 35.247.75.98 port 56046:11: Bye Bye [preauth]
Dec 06 09:04:30 np0005548790.localdomain sshd[98010]: Disconnected from authenticating user root 35.247.75.98 port 56046 [preauth]
Dec 06 09:04:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:04:32 np0005548790.localdomain podman[98012]: 2025-12-06 09:04:32.569929261 +0000 UTC m=+0.087567595 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team)
Dec 06 09:04:32 np0005548790.localdomain podman[98012]: 2025-12-06 09:04:32.605480084 +0000 UTC m=+0.123118378 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:04:32 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:04:34 np0005548790.localdomain podman[98033]: 2025-12-06 09:04:34.585122888 +0000 UTC m=+0.095391752 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git)
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: tmp-crun.3GQkcp.mount: Deactivated successfully.
Dec 06 09:04:34 np0005548790.localdomain podman[98033]: 2025-12-06 09:04:34.645167181 +0000 UTC m=+0.155436035 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 09:04:34 np0005548790.localdomain podman[98032]: 2025-12-06 09:04:34.653473231 +0000 UTC m=+0.165440530 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, release=1761123044, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com)
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:04:34 np0005548790.localdomain podman[98034]: 2025-12-06 09:04:34.689924868 +0000 UTC m=+0.195885908 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, vcs-type=git, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 06 09:04:34 np0005548790.localdomain podman[98039]: 2025-12-06 09:04:34.750951908 +0000 UTC m=+0.249854630 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team)
Dec 06 09:04:34 np0005548790.localdomain podman[98032]: 2025-12-06 09:04:34.7645954 +0000 UTC m=+0.276562699 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4)
Dec 06 09:04:34 np0005548790.localdomain podman[98034]: 2025-12-06 09:04:34.771990396 +0000 UTC m=+0.277951416 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:04:34 np0005548790.localdomain podman[98046]: 2025-12-06 09:04:34.844168101 +0000 UTC m=+0.336772076 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:04:34 np0005548790.localdomain podman[98046]: 2025-12-06 09:04:34.852497692 +0000 UTC m=+0.345101737 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, 
url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container)
Dec 06 09:04:34 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:04:35 np0005548790.localdomain podman[98039]: 2025-12-06 09:04:35.117202615 +0000 UTC m=+0.616105357 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-type=git, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 06 09:04:35 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:04:36 np0005548790.localdomain sudo[98145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:04:36 np0005548790.localdomain sudo[98145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:04:36 np0005548790.localdomain sudo[98145]: pam_unix(sudo:session): session closed for user root
Dec 06 09:04:36 np0005548790.localdomain sudo[98160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:04:36 np0005548790.localdomain sudo[98160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:04:37 np0005548790.localdomain sudo[98160]: pam_unix(sudo:session): session closed for user root
Dec 06 09:04:37 np0005548790.localdomain sudo[98207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:04:37 np0005548790.localdomain sudo[98207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:04:37 np0005548790.localdomain sudo[98207]: pam_unix(sudo:session): session closed for user root
Dec 06 09:04:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:04:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:04:39 np0005548790.localdomain podman[98223]: 2025-12-06 09:04:39.571910139 +0000 UTC m=+0.081719949 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:04:39 np0005548790.localdomain podman[98223]: 2025-12-06 09:04:39.61833414 +0000 UTC m=+0.128143940 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 06 09:04:39 np0005548790.localdomain podman[98223]: unhealthy
Dec 06 09:04:39 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:04:39 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:04:39 np0005548790.localdomain podman[98222]: 2025-12-06 09:04:39.61983107 +0000 UTC m=+0.130309218 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12)
Dec 06 09:04:39 np0005548790.localdomain podman[98222]: 2025-12-06 09:04:39.700293115 +0000 UTC m=+0.210771263 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:04:39 np0005548790.localdomain podman[98222]: unhealthy
Dec 06 09:04:39 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:04:39 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:04:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:04:42 np0005548790.localdomain podman[98261]: 2025-12-06 09:04:42.591761951 +0000 UTC m=+0.109556717 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:04:42 np0005548790.localdomain podman[98261]: 2025-12-06 09:04:42.652414061 +0000 UTC m=+0.170208877 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:04:42 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:04:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:04:51 np0005548790.localdomain systemd[1]: tmp-crun.gn3c2i.mount: Deactivated successfully.
Dec 06 09:04:51 np0005548790.localdomain podman[98287]: 2025-12-06 09:04:51.58543766 +0000 UTC m=+0.104582145 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, maintainer=OpenStack 
TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 06 09:04:51 np0005548790.localdomain podman[98287]: 2025-12-06 09:04:51.802827548 +0000 UTC m=+0.321972093 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:04:51 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:05:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:05:03 np0005548790.localdomain podman[98316]: 2025-12-06 09:05:03.561837888 +0000 UTC m=+0.078994178 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=collectd)
Dec 06 09:05:03 np0005548790.localdomain podman[98316]: 2025-12-06 09:05:03.575481609 +0000 UTC m=+0.092637869 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 09:05:03 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:05:05 np0005548790.localdomain podman[98336]: 2025-12-06 09:05:05.575395151 +0000 UTC m=+0.090269116 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container)
Dec 06 09:05:05 np0005548790.localdomain podman[98336]: 2025-12-06 09:05:05.582831808 +0000 UTC m=+0.097705813 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team)
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:05:05 np0005548790.localdomain podman[98337]: 2025-12-06 09:05:05.633163204 +0000 UTC m=+0.144082145 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:05:05 np0005548790.localdomain podman[98337]: 2025-12-06 09:05:05.665389548 +0000 UTC m=+0.176308459 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:05:05 np0005548790.localdomain podman[98347]: 2025-12-06 09:05:05.682908864 +0000 UTC m=+0.184075085 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1)
Dec 06 09:05:05 np0005548790.localdomain podman[98344]: 2025-12-06 09:05:05.740583933 +0000 UTC m=+0.244330333 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 06 09:05:05 np0005548790.localdomain podman[98338]: 2025-12-06 09:05:05.7887141 +0000 UTC m=+0.293967749 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, url=https://www.redhat.com)
Dec 06 09:05:05 np0005548790.localdomain podman[98338]: 2025-12-06 09:05:05.815702787 +0000 UTC m=+0.320956446 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:05:05 np0005548790.localdomain podman[98347]: 2025-12-06 09:05:05.868180239 +0000 UTC m=+0.369346460 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., container_name=logrotate_crond, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, url=https://www.redhat.com, 
name=rhosp17/openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container)
Dec 06 09:05:05 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:05:06 np0005548790.localdomain podman[98344]: 2025-12-06 09:05:06.091332879 +0000 UTC m=+0.595079279 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:05:06 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:05:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:05:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:05:10 np0005548790.localdomain podman[98451]: 2025-12-06 09:05:10.569541516 +0000 UTC m=+0.081168835 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, release=1761123044, distribution-scope=public)
Dec 06 09:05:10 np0005548790.localdomain podman[98451]: 2025-12-06 09:05:10.588287194 +0000 UTC m=+0.099914333 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 09:05:10 np0005548790.localdomain podman[98451]: unhealthy
Dec 06 09:05:10 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:05:10 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:05:10 np0005548790.localdomain podman[98450]: 2025-12-06 09:05:10.677610074 +0000 UTC m=+0.191170554 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 06 09:05:10 np0005548790.localdomain podman[98450]: 2025-12-06 09:05:10.723266265 +0000 UTC m=+0.236826735 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 09:05:10 np0005548790.localdomain podman[98450]: unhealthy
Dec 06 09:05:10 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:05:10 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:05:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:05:13 np0005548790.localdomain systemd[1]: tmp-crun.Bfl55o.mount: Deactivated successfully.
Dec 06 09:05:13 np0005548790.localdomain podman[98490]: 2025-12-06 09:05:13.529667835 +0000 UTC m=+0.054316523 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-type=git, distribution-scope=public, config_id=tripleo_step5, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com)
Dec 06 09:05:13 np0005548790.localdomain podman[98490]: 2025-12-06 09:05:13.553171238 +0000 UTC m=+0.077819966 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, vcs-type=git)
Dec 06 09:05:13 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:05:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:05:22 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:05:22 np0005548790.localdomain recover_tripleo_nova_virtqemud[98520]: 62556
Dec 06 09:05:22 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:05:22 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:05:22 np0005548790.localdomain systemd[1]: tmp-crun.z935Pq.mount: Deactivated successfully.
Dec 06 09:05:22 np0005548790.localdomain podman[98518]: 2025-12-06 09:05:22.57247499 +0000 UTC m=+0.085989433 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64)
Dec 06 09:05:22 np0005548790.localdomain podman[98518]: 2025-12-06 09:05:22.787226257 +0000 UTC m=+0.300740750 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 06 09:05:22 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:05:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:05:34 np0005548790.localdomain systemd[1]: tmp-crun.1HhfIe.mount: Deactivated successfully.
Dec 06 09:05:34 np0005548790.localdomain podman[98547]: 2025-12-06 09:05:34.623880977 +0000 UTC m=+0.133985806 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044)
Dec 06 09:05:34 np0005548790.localdomain podman[98547]: 2025-12-06 09:05:34.636326987 +0000 UTC m=+0.146431886 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container)
Dec 06 09:05:34 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: tmp-crun.Dh13OU.mount: Deactivated successfully.
Dec 06 09:05:36 np0005548790.localdomain podman[98569]: 2025-12-06 09:05:36.588364969 +0000 UTC m=+0.095754711 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: tmp-crun.IRw9vJ.mount: Deactivated successfully.
Dec 06 09:05:36 np0005548790.localdomain podman[98568]: 2025-12-06 09:05:36.640329707 +0000 UTC m=+0.150759750 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true)
Dec 06 09:05:36 np0005548790.localdomain podman[98567]: 2025-12-06 09:05:36.691253169 +0000 UTC m=+0.204194688 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 09:05:36 np0005548790.localdomain podman[98568]: 2025-12-06 09:05:36.744304067 +0000 UTC m=+0.254734110 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, vcs-type=git)
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:05:36 np0005548790.localdomain podman[98567]: 2025-12-06 09:05:36.77834536 +0000 UTC m=+0.291286879 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:05:36 np0005548790.localdomain podman[98570]: 2025-12-06 09:05:36.749858314 +0000 UTC m=+0.251319649 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc.)
Dec 06 09:05:36 np0005548790.localdomain podman[98569]: 2025-12-06 09:05:36.816890192 +0000 UTC m=+0.324279934 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:05:36 np0005548790.localdomain podman[98577]: 2025-12-06 09:05:36.802146831 +0000 UTC m=+0.301636763 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron)
Dec 06 09:05:36 np0005548790.localdomain podman[98577]: 2025-12-06 09:05:36.882201355 +0000 UTC m=+0.381691287 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=)
Dec 06 09:05:36 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:05:37 np0005548790.localdomain podman[98570]: 2025-12-06 09:05:37.213386092 +0000 UTC m=+0.714847437 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, vcs-type=git)
Dec 06 09:05:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:05:38 np0005548790.localdomain sudo[98682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:05:38 np0005548790.localdomain sudo[98682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:05:38 np0005548790.localdomain sudo[98682]: pam_unix(sudo:session): session closed for user root
Dec 06 09:05:38 np0005548790.localdomain sudo[98697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:05:38 np0005548790.localdomain sudo[98697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:05:38 np0005548790.localdomain sudo[98697]: pam_unix(sudo:session): session closed for user root
Dec 06 09:05:39 np0005548790.localdomain sudo[98744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:05:39 np0005548790.localdomain sudo[98744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:05:39 np0005548790.localdomain sudo[98744]: pam_unix(sudo:session): session closed for user root
Dec 06 09:05:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:05:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:05:41 np0005548790.localdomain podman[98759]: 2025-12-06 09:05:41.568225185 +0000 UTC m=+0.077464877 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:05:41 np0005548790.localdomain podman[98759]: 2025-12-06 09:05:41.609679714 +0000 UTC m=+0.118919446 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:05:41 np0005548790.localdomain podman[98759]: unhealthy
Dec 06 09:05:41 np0005548790.localdomain podman[98760]: 2025-12-06 09:05:41.622138465 +0000 UTC m=+0.130106254 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:05:41 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:05:41 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:05:41 np0005548790.localdomain podman[98760]: 2025-12-06 09:05:41.666248975 +0000 UTC m=+0.174216784 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., 
build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:05:41 np0005548790.localdomain podman[98760]: unhealthy
Dec 06 09:05:41 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:05:41 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:05:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:05:44 np0005548790.localdomain systemd[1]: tmp-crun.6IkSxV.mount: Deactivated successfully.
Dec 06 09:05:44 np0005548790.localdomain podman[98799]: 2025-12-06 09:05:44.570828909 +0000 UTC m=+0.089495356 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible)
Dec 06 09:05:44 np0005548790.localdomain podman[98799]: 2025-12-06 09:05:44.599322795 +0000 UTC m=+0.117989292 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:05:44 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:05:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:05:53 np0005548790.localdomain podman[98825]: 2025-12-06 09:05:53.571506804 +0000 UTC m=+0.083201878 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 06 09:05:53 np0005548790.localdomain podman[98825]: 2025-12-06 09:05:53.7661745 +0000 UTC m=+0.277869484 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044)
Dec 06 09:05:53 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:06:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:06:05 np0005548790.localdomain podman[98855]: 2025-12-06 09:06:05.562847379 +0000 UTC m=+0.080580780 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:06:05 np0005548790.localdomain podman[98855]: 2025-12-06 09:06:05.575091543 +0000 UTC m=+0.092824974 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12)
Dec 06 09:06:05 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:06:07 np0005548790.localdomain podman[98882]: 2025-12-06 09:06:07.588923054 +0000 UTC m=+0.087712398 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: tmp-crun.NNbwiN.mount: Deactivated successfully.
Dec 06 09:06:07 np0005548790.localdomain podman[98875]: 2025-12-06 09:06:07.652517652 +0000 UTC m=+0.161844216 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:06:07 np0005548790.localdomain podman[98876]: 2025-12-06 09:06:07.708913988 +0000 UTC m=+0.213703121 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z)
Dec 06 09:06:07 np0005548790.localdomain podman[98876]: 2025-12-06 09:06:07.737199718 +0000 UTC m=+0.241988821 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:06:07 np0005548790.localdomain podman[98884]: 2025-12-06 09:06:07.752038352 +0000 UTC m=+0.248130354 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 09:06:07 np0005548790.localdomain podman[98884]: 2025-12-06 09:06:07.784843422 +0000 UTC m=+0.280935424 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:06:07 np0005548790.localdomain podman[98875]: 2025-12-06 09:06:07.820317644 +0000 UTC m=+0.329644218 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:06:07 np0005548790.localdomain podman[98877]: 2025-12-06 09:06:07.793941503 +0000 UTC m=+0.295617043 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team)
Dec 06 09:06:07 np0005548790.localdomain podman[98877]: 2025-12-06 09:06:07.876104844 +0000 UTC m=+0.377780414 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, 
build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:06:07 np0005548790.localdomain podman[98882]: 2025-12-06 09:06:07.952220033 +0000 UTC m=+0.451009337 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, distribution-scope=public)
Dec 06 09:06:07 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:06:12 np0005548790.localdomain podman[98985]: 2025-12-06 09:06:12.577019678 +0000 UTC m=+0.089725792 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, 
maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ovn_controller)
Dec 06 09:06:12 np0005548790.localdomain podman[98985]: 2025-12-06 09:06:12.588089242 +0000 UTC m=+0.100795346 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, 
release=1761123044, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:06:12 np0005548790.localdomain podman[98985]: unhealthy
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: tmp-crun.nW5B9Y.mount: Deactivated successfully.
Dec 06 09:06:12 np0005548790.localdomain podman[98984]: 2025-12-06 09:06:12.63818855 +0000 UTC m=+0.153321198 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com)
Dec 06 09:06:12 np0005548790.localdomain podman[98984]: 2025-12-06 09:06:12.675672675 +0000 UTC m=+0.190805263 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 06 09:06:12 np0005548790.localdomain podman[98984]: unhealthy
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:06:12 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:06:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:06:15 np0005548790.localdomain podman[99025]: 2025-12-06 09:06:15.557205807 +0000 UTC m=+0.076518163 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Dec 06 09:06:15 np0005548790.localdomain podman[99025]: 2025-12-06 09:06:15.583144243 +0000 UTC m=+0.102456589 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:06:15 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:06:20 np0005548790.localdomain sshd[99050]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:06:21 np0005548790.localdomain sshd[99050]: Received disconnect from 35.247.75.98 port 55592:11: Bye Bye [preauth]
Dec 06 09:06:21 np0005548790.localdomain sshd[99050]: Disconnected from authenticating user root 35.247.75.98 port 55592 [preauth]
Dec 06 09:06:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:06:24 np0005548790.localdomain systemd[1]: tmp-crun.L7hGw6.mount: Deactivated successfully.
Dec 06 09:06:24 np0005548790.localdomain podman[99052]: 2025-12-06 09:06:24.579932723 +0000 UTC m=+0.088306429 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.openshift.expose-services=)
Dec 06 09:06:24 np0005548790.localdomain podman[99052]: 2025-12-06 09:06:24.769130257 +0000 UTC m=+0.277503903 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, url=https://www.redhat.com, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1)
Dec 06 09:06:24 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:06:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:06:36 np0005548790.localdomain podman[99081]: 2025-12-06 09:06:36.566438009 +0000 UTC m=+0.081628770 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, architecture=x86_64, io.buildah.version=1.41.4)
Dec 06 09:06:36 np0005548790.localdomain podman[99081]: 2025-12-06 09:06:36.576751496 +0000 UTC m=+0.091942217 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true)
Dec 06 09:06:36 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:06:38 np0005548790.localdomain podman[99103]: 2025-12-06 09:06:38.563586468 +0000 UTC m=+0.077894940 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: tmp-crun.WTjYyy.mount: Deactivated successfully.
Dec 06 09:06:38 np0005548790.localdomain podman[99102]: 2025-12-06 09:06:38.622856348 +0000 UTC m=+0.139339249 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 06 09:06:38 np0005548790.localdomain podman[99103]: 2025-12-06 09:06:38.626355821 +0000 UTC m=+0.140664303 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:06:38 np0005548790.localdomain podman[99101]: 2025-12-06 09:06:38.677455702 +0000 UTC m=+0.193270784 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:06:38 np0005548790.localdomain podman[99102]: 2025-12-06 09:06:38.682362693 +0000 UTC m=+0.198845604 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true)
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:06:38 np0005548790.localdomain podman[99101]: 2025-12-06 09:06:38.710979091 +0000 UTC m=+0.226794183 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Dec 06 09:06:38 np0005548790.localdomain podman[99104]: 2025-12-06 09:06:38.720841115 +0000 UTC m=+0.235170368 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:06:38 np0005548790.localdomain podman[99105]: 2025-12-06 09:06:38.774917205 +0000 UTC m=+0.287404718 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.component=openstack-cron-container, release=1761123044, container_name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 09:06:38 np0005548790.localdomain podman[99105]: 2025-12-06 09:06:38.810184981 +0000 UTC m=+0.322672444 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, 
build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond)
Dec 06 09:06:38 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:06:39 np0005548790.localdomain podman[99104]: 2025-12-06 09:06:39.064281896 +0000 UTC m=+0.578611129 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Dec 06 09:06:39 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:06:39 np0005548790.localdomain systemd[1]: tmp-crun.fcqf4U.mount: Deactivated successfully.
Dec 06 09:06:39 np0005548790.localdomain sudo[99214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:06:39 np0005548790.localdomain sudo[99214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:06:39 np0005548790.localdomain sudo[99214]: pam_unix(sudo:session): session closed for user root
Dec 06 09:06:39 np0005548790.localdomain sudo[99229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:06:39 np0005548790.localdomain sudo[99229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:06:40 np0005548790.localdomain sudo[99229]: pam_unix(sudo:session): session closed for user root
Dec 06 09:06:41 np0005548790.localdomain sudo[99276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:06:41 np0005548790.localdomain sudo[99276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:06:41 np0005548790.localdomain sudo[99276]: pam_unix(sudo:session): session closed for user root
Dec 06 09:06:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:06:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:06:43 np0005548790.localdomain podman[99291]: 2025-12-06 09:06:43.578670189 +0000 UTC m=+0.088719450 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z)
Dec 06 09:06:43 np0005548790.localdomain podman[99291]: 2025-12-06 09:06:43.622523706 +0000 UTC m=+0.132572977 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4)
Dec 06 09:06:43 np0005548790.localdomain podman[99291]: unhealthy
Dec 06 09:06:43 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:06:43 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:06:43 np0005548790.localdomain podman[99292]: 2025-12-06 09:06:43.634657461 +0000 UTC m=+0.144413274 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, managed_by=tripleo_ansible, 
io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 09:06:43 np0005548790.localdomain podman[99292]: 2025-12-06 09:06:43.653162248 +0000 UTC m=+0.162918051 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, release=1761123044, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public)
Dec 06 09:06:43 np0005548790.localdomain podman[99292]: unhealthy
Dec 06 09:06:43 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:06:43 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:06:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:06:46 np0005548790.localdomain podman[99332]: 2025-12-06 09:06:46.562014205 +0000 UTC m=+0.080019316 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:06:46 np0005548790.localdomain podman[99332]: 2025-12-06 09:06:46.589567824 +0000 UTC m=+0.107572905 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible)
Dec 06 09:06:46 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:06:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:06:55 np0005548790.localdomain podman[99358]: 2025-12-06 09:06:55.552677841 +0000 UTC m=+0.073806200 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, config_id=tripleo_step1)
Dec 06 09:06:55 np0005548790.localdomain podman[99358]: 2025-12-06 09:06:55.735367141 +0000 UTC m=+0.256495470 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 06 09:06:55 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:07:05 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:07:05 np0005548790.localdomain recover_tripleo_nova_virtqemud[99388]: 62556
Dec 06 09:07:05 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:07:05 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:07:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:07:07 np0005548790.localdomain systemd[1]: tmp-crun.DlKLI6.mount: Deactivated successfully.
Dec 06 09:07:07 np0005548790.localdomain podman[99389]: 2025-12-06 09:07:07.570682063 +0000 UTC m=+0.086829379 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, build-date=2025-11-18T22:51:28Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:07:07 np0005548790.localdomain podman[99389]: 2025-12-06 09:07:07.586157008 +0000 UTC m=+0.102304304 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 06 09:07:07 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: tmp-crun.snlyj6.mount: Deactivated successfully.
Dec 06 09:07:09 np0005548790.localdomain podman[99414]: 2025-12-06 09:07:09.567581224 +0000 UTC m=+0.075725452 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container)
Dec 06 09:07:09 np0005548790.localdomain podman[99414]: 2025-12-06 09:07:09.577930332 +0000 UTC m=+0.086074580 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:07:09 np0005548790.localdomain podman[99413]: 2025-12-06 09:07:09.629061543 +0000 UTC m=+0.134146119 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.12)
Dec 06 09:07:09 np0005548790.localdomain podman[99411]: 2025-12-06 09:07:09.582839754 +0000 UTC m=+0.095371559 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-19T00:11:48Z)
Dec 06 09:07:09 np0005548790.localdomain podman[99411]: 2025-12-06 09:07:09.666154078 +0000 UTC m=+0.178685883 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com)
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:07:09 np0005548790.localdomain podman[99412]: 2025-12-06 09:07:09.683217456 +0000 UTC m=+0.189791972 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 06 09:07:09 np0005548790.localdomain podman[99410]: 2025-12-06 09:07:09.714051863 +0000 UTC m=+0.228121990 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, distribution-scope=public, name=rhosp17/openstack-iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 06 09:07:09 np0005548790.localdomain podman[99412]: 2025-12-06 09:07:09.73408304 +0000 UTC m=+0.240657546 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:12:45Z, version=17.1.12)
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:07:09 np0005548790.localdomain podman[99410]: 2025-12-06 09:07:09.747708625 +0000 UTC m=+0.261778802 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, 
com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com)
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:07:09 np0005548790.localdomain podman[99413]: 2025-12-06 09:07:09.978893645 +0000 UTC m=+0.483978241 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:07:09 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:07:14 np0005548790.localdomain podman[99527]: 2025-12-06 09:07:14.561321864 +0000 UTC m=+0.075911157 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:07:14 np0005548790.localdomain podman[99527]: 2025-12-06 09:07:14.603280629 +0000 UTC m=+0.117869872 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044)
Dec 06 09:07:14 np0005548790.localdomain podman[99527]: unhealthy
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: tmp-crun.T44sYr.mount: Deactivated successfully.
Dec 06 09:07:14 np0005548790.localdomain podman[99528]: 2025-12-06 09:07:14.642596183 +0000 UTC m=+0.153670612 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller)
Dec 06 09:07:14 np0005548790.localdomain podman[99528]: 2025-12-06 09:07:14.680911761 +0000 UTC m=+0.191986150 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:07:14 np0005548790.localdomain podman[99528]: unhealthy
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:07:14 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:07:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:07:17 np0005548790.localdomain podman[99568]: 2025-12-06 09:07:17.54806307 +0000 UTC m=+0.066271168 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, version=17.1.12, config_id=tripleo_step5, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:07:17 np0005548790.localdomain podman[99568]: 2025-12-06 09:07:17.575233759 +0000 UTC m=+0.093441867 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.openshift.expose-services=, version=17.1.12, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:07:17 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:07:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:07:26 np0005548790.localdomain systemd[1]: tmp-crun.pBR2Vl.mount: Deactivated successfully.
Dec 06 09:07:26 np0005548790.localdomain podman[99594]: 2025-12-06 09:07:26.554844879 +0000 UTC m=+0.071382215 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:07:26 np0005548790.localdomain podman[99594]: 2025-12-06 09:07:26.736243554 +0000 UTC m=+0.252780860 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12)
Dec 06 09:07:26 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:07:35 np0005548790.localdomain sshd[36062]: Received disconnect from 192.168.122.100 port 55068:11: disconnected by user
Dec 06 09:07:35 np0005548790.localdomain sshd[36062]: Disconnected from user tripleo-admin 192.168.122.100 port 55068
Dec 06 09:07:35 np0005548790.localdomain sshd[36042]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 06 09:07:35 np0005548790.localdomain systemd[1]: session-28.scope: Deactivated successfully.
Dec 06 09:07:35 np0005548790.localdomain systemd[1]: session-28.scope: Consumed 7min 4.905s CPU time.
Dec 06 09:07:35 np0005548790.localdomain systemd-logind[760]: Session 28 logged out. Waiting for processes to exit.
Dec 06 09:07:35 np0005548790.localdomain systemd-logind[760]: Removed session 28.
Dec 06 09:07:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:07:38 np0005548790.localdomain podman[99623]: 2025-12-06 09:07:38.562835532 +0000 UTC m=+0.079949066 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:07:38 np0005548790.localdomain podman[99623]: 2025-12-06 09:07:38.574234808 +0000 UTC m=+0.091348382 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd)
Dec 06 09:07:38 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:07:38 np0005548790.localdomain kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:7e:49:5b MACPROTO=0800 SRC=77.42.16.162 DST=38.102.83.234 LEN=40 TOS=0x00 PREC=0x00 TTL=239 ID=61195 PROTO=TCP SPT=51887 DPT=9090 SEQ=2976882531 ACK=0 WINDOW=1024 RES=0x00 SYN URGP=0 
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: tmp-crun.gSShYV.mount: Deactivated successfully.
Dec 06 09:07:40 np0005548790.localdomain podman[99642]: 2025-12-06 09:07:40.565695353 +0000 UTC m=+0.081995011 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Dec 06 09:07:40 np0005548790.localdomain podman[99643]: 2025-12-06 09:07:40.638930057 +0000 UTC m=+0.150570630 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute)
Dec 06 09:07:40 np0005548790.localdomain podman[99642]: 2025-12-06 09:07:40.648291728 +0000 UTC m=+0.164591346 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:07:40 np0005548790.localdomain podman[99656]: 2025-12-06 09:07:40.609126477 +0000 UTC m=+0.110248497 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.expose-services=)
Dec 06 09:07:40 np0005548790.localdomain podman[99643]: 2025-12-06 09:07:40.665182721 +0000 UTC m=+0.176823284 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, tcib_managed=true, url=https://www.redhat.com, release=1761123044, container_name=ceilometer_agent_compute, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:07:40 np0005548790.localdomain podman[99656]: 2025-12-06 09:07:40.693395147 +0000 UTC m=+0.194517187 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4)
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:07:40 np0005548790.localdomain podman[99650]: 2025-12-06 09:07:40.73527228 +0000 UTC m=+0.238505307 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true)
Dec 06 09:07:40 np0005548790.localdomain podman[99644]: 2025-12-06 09:07:40.786524495 +0000 UTC m=+0.294234493 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 06 09:07:40 np0005548790.localdomain podman[99644]: 2025-12-06 09:07:40.841279813 +0000 UTC m=+0.348989821 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:07:40 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:07:41 np0005548790.localdomain podman[99650]: 2025-12-06 09:07:41.129302528 +0000 UTC m=+0.632535515 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:07:41 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:07:41 np0005548790.localdomain sudo[99755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:07:41 np0005548790.localdomain sudo[99755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:07:41 np0005548790.localdomain sudo[99755]: pam_unix(sudo:session): session closed for user root
Dec 06 09:07:41 np0005548790.localdomain sudo[99770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:07:41 np0005548790.localdomain sudo[99770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:07:41 np0005548790.localdomain systemd[1]: tmp-crun.FHz1MI.mount: Deactivated successfully.
Dec 06 09:07:41 np0005548790.localdomain sudo[99770]: pam_unix(sudo:session): session closed for user root
Dec 06 09:07:42 np0005548790.localdomain sudo[99816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:07:42 np0005548790.localdomain sudo[99816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:07:42 np0005548790.localdomain sudo[99816]: pam_unix(sudo:session): session closed for user root
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: tmp-crun.HjM7yR.mount: Deactivated successfully.
Dec 06 09:07:45 np0005548790.localdomain podman[99831]: 2025-12-06 09:07:45.617487919 +0000 UTC m=+0.099577212 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, 
maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 09:07:45 np0005548790.localdomain podman[99831]: 2025-12-06 09:07:45.658974861 +0000 UTC m=+0.141064174 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent)
Dec 06 09:07:45 np0005548790.localdomain podman[99831]: unhealthy
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:07:45 np0005548790.localdomain podman[99832]: 2025-12-06 09:07:45.708657003 +0000 UTC m=+0.190160490 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 09:07:45 np0005548790.localdomain podman[99832]: 2025-12-06 09:07:45.726158523 +0000 UTC m=+0.207662020 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, release=1761123044, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Dec 06 09:07:45 np0005548790.localdomain podman[99832]: unhealthy
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:07:45 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:07:46 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Activating special unit Exit the Session...
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Removed slice User Background Tasks Slice.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped target Main User Target.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped target Basic System.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped target Paths.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped target Sockets.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped target Timers.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Closed D-Bus User Message Bus Socket.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Stopped Create User's Volatile Files and Directories.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Removed slice User Application Slice.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Reached target Shutdown.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Finished Exit the Session.
Dec 06 09:07:46 np0005548790.localdomain systemd[36046]: Reached target Exit the Session.
Dec 06 09:07:46 np0005548790.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 06 09:07:46 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 06 09:07:46 np0005548790.localdomain systemd[1]: user@1003.service: Consumed 4.497s CPU time, read 0B from disk, written 7.0K to disk.
Dec 06 09:07:46 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 06 09:07:46 np0005548790.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 06 09:07:47 np0005548790.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 06 09:07:47 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 06 09:07:47 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 06 09:07:47 np0005548790.localdomain systemd[1]: user-1003.slice: Consumed 7min 9.429s CPU time.
Dec 06 09:07:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:07:48 np0005548790.localdomain podman[99873]: 2025-12-06 09:07:48.562470645 +0000 UTC m=+0.081422424 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64)
Dec 06 09:07:48 np0005548790.localdomain podman[99873]: 2025-12-06 09:07:48.590143767 +0000 UTC m=+0.109095536 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:07:48 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:07:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:07:57 np0005548790.localdomain systemd[1]: tmp-crun.xBiDYf.mount: Deactivated successfully.
Dec 06 09:07:57 np0005548790.localdomain podman[99899]: 2025-12-06 09:07:57.559935243 +0000 UTC m=+0.081063125 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:07:57 np0005548790.localdomain podman[99899]: 2025-12-06 09:07:57.756889675 +0000 UTC m=+0.278017607 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:07:57 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:08:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:08:09 np0005548790.localdomain systemd[1]: tmp-crun.9Xj5Az.mount: Deactivated successfully.
Dec 06 09:08:09 np0005548790.localdomain podman[99929]: 2025-12-06 09:08:09.580251957 +0000 UTC m=+0.098646366 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:08:09 np0005548790.localdomain podman[99929]: 2025-12-06 09:08:09.596218596 +0000 UTC m=+0.114613015 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 06 09:08:09 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:08:11 np0005548790.localdomain sshd[99949]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: tmp-crun.0lpaGM.mount: Deactivated successfully.
Dec 06 09:08:11 np0005548790.localdomain podman[99951]: 2025-12-06 09:08:11.631832084 +0000 UTC m=+0.141145866 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute)
Dec 06 09:08:11 np0005548790.localdomain podman[99959]: 2025-12-06 09:08:11.642310686 +0000 UTC m=+0.139390150 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, container_name=logrotate_crond)
Dec 06 09:08:11 np0005548790.localdomain podman[99959]: 2025-12-06 09:08:11.674306923 +0000 UTC m=+0.171386387 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
version=17.1.12, container_name=logrotate_crond, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible)
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:08:11 np0005548790.localdomain podman[99953]: 2025-12-06 09:08:11.59363032 +0000 UTC m=+0.094978798 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, container_name=nova_migration_target, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:08:11 np0005548790.localdomain podman[99952]: 2025-12-06 09:08:11.728445336 +0000 UTC m=+0.234338896 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, distribution-scope=public)
Dec 06 09:08:11 np0005548790.localdomain podman[99952]: 2025-12-06 09:08:11.756635012 +0000 UTC m=+0.262528612 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:08:11 np0005548790.localdomain podman[99950]: 2025-12-06 09:08:11.677672864 +0000 UTC m=+0.189395751 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12)
Dec 06 09:08:11 np0005548790.localdomain podman[99950]: 2025-12-06 09:08:11.808357139 +0000 UTC m=+0.320080006 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:08:11 np0005548790.localdomain podman[99951]: 2025-12-06 09:08:11.859804439 +0000 UTC m=+0.369118211 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:08:11 np0005548790.localdomain podman[99953]: 2025-12-06 09:08:11.982641023 +0000 UTC m=+0.483989521 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:08:11 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:08:13 np0005548790.localdomain sshd[99949]: Received disconnect from 35.247.75.98 port 51506:11: Bye Bye [preauth]
Dec 06 09:08:13 np0005548790.localdomain sshd[99949]: Disconnected from authenticating user root 35.247.75.98 port 51506 [preauth]
Dec 06 09:08:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:08:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:08:16 np0005548790.localdomain podman[100065]: 2025-12-06 09:08:16.572035819 +0000 UTC m=+0.081844027 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 09:08:16 np0005548790.localdomain podman[100065]: 2025-12-06 09:08:16.592286312 +0000 UTC m=+0.102094530 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 06 09:08:16 np0005548790.localdomain podman[100065]: unhealthy
Dec 06 09:08:16 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:08:16 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:08:16 np0005548790.localdomain podman[100064]: 2025-12-06 09:08:16.680708983 +0000 UTC m=+0.193337796 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12)
Dec 06 09:08:16 np0005548790.localdomain podman[100064]: 2025-12-06 09:08:16.720697186 +0000 UTC m=+0.233325959 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1761123044, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 06 09:08:16 np0005548790.localdomain podman[100064]: unhealthy
Dec 06 09:08:16 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:08:16 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:08:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:08:19 np0005548790.localdomain podman[100105]: 2025-12-06 09:08:19.564091438 +0000 UTC m=+0.080481860 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Dec 06 09:08:19 np0005548790.localdomain podman[100105]: 2025-12-06 09:08:19.594303998 +0000 UTC m=+0.110694470 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:08:19 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:08:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:08:28 np0005548790.localdomain systemd[1]: tmp-crun.J6OqDB.mount: Deactivated successfully.
Dec 06 09:08:28 np0005548790.localdomain podman[100132]: 2025-12-06 09:08:28.561754012 +0000 UTC m=+0.082317309 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, 
batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:08:28 np0005548790.localdomain podman[100132]: 2025-12-06 09:08:28.740143166 +0000 UTC m=+0.260706423 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 
qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z)
Dec 06 09:08:28 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:08:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:08:40 np0005548790.localdomain podman[100161]: 2025-12-06 09:08:40.564590005 +0000 UTC m=+0.082882823 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4)
Dec 06 09:08:40 np0005548790.localdomain podman[100161]: 2025-12-06 09:08:40.603369525 +0000 UTC m=+0.121662313 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
name=rhosp17/openstack-collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T22:51:28Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12)
Dec 06 09:08:40 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:08:42 np0005548790.localdomain podman[100181]: 2025-12-06 09:08:42.580913059 +0000 UTC m=+0.094201378 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, distribution-scope=public, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 06 09:08:42 np0005548790.localdomain podman[100183]: 2025-12-06 09:08:42.638273626 +0000 UTC m=+0.144880475 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:08:42 np0005548790.localdomain podman[100184]: 2025-12-06 09:08:42.691444432 +0000 UTC m=+0.194443415 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, 
distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Dec 06 09:08:42 np0005548790.localdomain podman[100183]: 2025-12-06 09:08:42.696350364 +0000 UTC m=+0.202957203 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:08:42 np0005548790.localdomain podman[100190]: 2025-12-06 09:08:42.758878192 +0000 UTC m=+0.258297709 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 06 09:08:42 np0005548790.localdomain sudo[100264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:08:42 np0005548790.localdomain sudo[100264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:08:42 np0005548790.localdomain sudo[100264]: pam_unix(sudo:session): session closed for user root
Dec 06 09:08:42 np0005548790.localdomain podman[100190]: 2025-12-06 09:08:42.794396204 +0000 UTC m=+0.293815731 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.12)
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:08:42 np0005548790.localdomain podman[100181]: 2025-12-06 09:08:42.819649431 +0000 UTC m=+0.332937830 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, version=17.1.12)
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:08:42 np0005548790.localdomain podman[100182]: 2025-12-06 09:08:42.804523936 +0000 UTC m=+0.313474888 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 09:08:42 np0005548790.localdomain sudo[100301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:08:42 np0005548790.localdomain sudo[100301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:08:42 np0005548790.localdomain podman[100182]: 2025-12-06 09:08:42.884399497 +0000 UTC m=+0.393350409 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044)
Dec 06 09:08:42 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:08:43 np0005548790.localdomain podman[100184]: 2025-12-06 09:08:43.066432789 +0000 UTC m=+0.569431822 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, container_name=nova_migration_target, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:08:43 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:08:43 np0005548790.localdomain sudo[100301]: pam_unix(sudo:session): session closed for user root
Dec 06 09:08:44 np0005548790.localdomain sudo[100358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:08:44 np0005548790.localdomain sudo[100358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:08:44 np0005548790.localdomain sudo[100358]: pam_unix(sudo:session): session closed for user root
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:08:47 np0005548790.localdomain podman[100374]: 2025-12-06 09:08:47.574650088 +0000 UTC m=+0.081517208 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64)
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: tmp-crun.ePGOkJ.mount: Deactivated successfully.
Dec 06 09:08:47 np0005548790.localdomain podman[100373]: 2025-12-06 09:08:47.647002478 +0000 UTC m=+0.155984184 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com)
Dec 06 09:08:47 np0005548790.localdomain podman[100374]: 2025-12-06 09:08:47.648053737 +0000 UTC m=+0.154920897 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible)
Dec 06 09:08:47 np0005548790.localdomain podman[100374]: unhealthy
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:08:47 np0005548790.localdomain podman[100373]: 2025-12-06 09:08:47.726771317 +0000 UTC m=+0.235753033 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z)
Dec 06 09:08:47 np0005548790.localdomain podman[100373]: unhealthy
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:08:47 np0005548790.localdomain recover_tripleo_nova_virtqemud[100415]: 62556
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:08:47 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:08:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:08:50 np0005548790.localdomain podman[100416]: 2025-12-06 09:08:50.570408674 +0000 UTC m=+0.087578689 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute)
Dec 06 09:08:50 np0005548790.localdomain podman[100416]: 2025-12-06 09:08:50.602268639 +0000 UTC m=+0.119438654 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git)
Dec 06 09:08:50 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:08:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:08:59 np0005548790.localdomain podman[100442]: 2025-12-06 09:08:59.565257125 +0000 UTC m=+0.082903995 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=metrics_qdr, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:08:59 np0005548790.localdomain podman[100442]: 2025-12-06 09:08:59.785243043 +0000 UTC m=+0.302889903 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://www.redhat.com)
Dec 06 09:08:59 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:09:02 np0005548790.localdomain anacron[94071]: Job `cron.daily' started
Dec 06 09:09:02 np0005548790.localdomain anacron[94071]: Job `cron.daily' terminated
Dec 06 09:09:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:09:11 np0005548790.localdomain podman[100473]: 2025-12-06 09:09:11.566257289 +0000 UTC m=+0.082044332 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:51:28Z, container_name=collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:09:11 np0005548790.localdomain podman[100473]: 2025-12-06 09:09:11.581269541 +0000 UTC m=+0.097056585 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:09:11 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:09:13 np0005548790.localdomain podman[100494]: 2025-12-06 09:09:13.609358901 +0000 UTC m=+0.121296794 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-iscsid)
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: tmp-crun.NAnDIS.mount: Deactivated successfully.
Dec 06 09:09:13 np0005548790.localdomain podman[100497]: 2025-12-06 09:09:13.637180127 +0000 UTC m=+0.140674863 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 09:09:13 np0005548790.localdomain podman[100496]: 2025-12-06 09:09:13.690179999 +0000 UTC m=+0.193246584 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:09:13 np0005548790.localdomain podman[100503]: 2025-12-06 09:09:13.748612776 +0000 UTC m=+0.248619489 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4)
Dec 06 09:09:13 np0005548790.localdomain podman[100496]: 2025-12-06 09:09:13.774079169 +0000 UTC m=+0.277145754 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:09:13 np0005548790.localdomain podman[100503]: 2025-12-06 09:09:13.783109191 +0000 UTC m=+0.283115924 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron)
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:09:13 np0005548790.localdomain podman[100494]: 2025-12-06 09:09:13.792459871 +0000 UTC m=+0.304397724 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc.)
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:09:13 np0005548790.localdomain podman[100495]: 2025-12-06 09:09:13.848894825 +0000 UTC m=+0.356601775 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 09:09:13 np0005548790.localdomain podman[100495]: 2025-12-06 09:09:13.905159084 +0000 UTC m=+0.412866014 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, 
url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:09:13 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:09:14 np0005548790.localdomain podman[100497]: 2025-12-06 09:09:14.033261169 +0000 UTC m=+0.536755865 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_migration_target, 
com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, architecture=x86_64)
Dec 06 09:09:14 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:09:14 np0005548790.localdomain systemd[1]: tmp-crun.lrSfrg.mount: Deactivated successfully.
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: tmp-crun.nECRxL.mount: Deactivated successfully.
Dec 06 09:09:18 np0005548790.localdomain podman[100607]: 2025-12-06 09:09:18.566085307 +0000 UTC m=+0.089362007 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent)
Dec 06 09:09:18 np0005548790.localdomain podman[100608]: 2025-12-06 09:09:18.572838459 +0000 UTC m=+0.090209661 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 09:09:18 np0005548790.localdomain podman[100608]: 2025-12-06 09:09:18.673329174 +0000 UTC m=+0.190700406 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, vcs-type=git, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z)
Dec 06 09:09:18 np0005548790.localdomain podman[100608]: unhealthy
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:09:18 np0005548790.localdomain podman[100607]: 2025-12-06 09:09:18.68733901 +0000 UTC m=+0.210615710 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Dec 06 09:09:18 np0005548790.localdomain podman[100607]: unhealthy
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:09:18 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:09:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:09:21 np0005548790.localdomain systemd[1]: tmp-crun.e8Hmtg.mount: Deactivated successfully.
Dec 06 09:09:21 np0005548790.localdomain podman[100648]: 2025-12-06 09:09:21.561869027 +0000 UTC m=+0.079692669 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 06 09:09:21 np0005548790.localdomain podman[100648]: 2025-12-06 09:09:21.589913329 +0000 UTC m=+0.107737001 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_compute, build-date=2025-11-19T00:36:58Z)
Dec 06 09:09:21 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:09:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:09:30 np0005548790.localdomain systemd[1]: tmp-crun.zIXJhE.mount: Deactivated successfully.
Dec 06 09:09:30 np0005548790.localdomain podman[100674]: 2025-12-06 09:09:30.566341473 +0000 UTC m=+0.082755320 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, version=17.1.12)
Dec 06 09:09:30 np0005548790.localdomain podman[100674]: 2025-12-06 09:09:30.738377837 +0000 UTC m=+0.254791674 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., description=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Dec 06 09:09:30 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:09:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:09:42 np0005548790.localdomain systemd[1]: tmp-crun.utQIg0.mount: Deactivated successfully.
Dec 06 09:09:42 np0005548790.localdomain podman[100703]: 2025-12-06 09:09:42.569725752 +0000 UTC m=+0.085567656 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, 
io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, release=1761123044)
Dec 06 09:09:42 np0005548790.localdomain podman[100703]: 2025-12-06 09:09:42.608136652 +0000 UTC m=+0.123978506 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:09:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:09:44 np0005548790.localdomain sudo[100723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:09:44 np0005548790.localdomain sudo[100723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:09:44 np0005548790.localdomain sudo[100723]: pam_unix(sudo:session): session closed for user root
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:09:44 np0005548790.localdomain sudo[100774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:09:44 np0005548790.localdomain sudo[100774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: tmp-crun.LXFae9.mount: Deactivated successfully.
Dec 06 09:09:44 np0005548790.localdomain podman[100739]: 2025-12-06 09:09:44.510307283 +0000 UTC m=+0.099151090 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute)
Dec 06 09:09:44 np0005548790.localdomain podman[100740]: 2025-12-06 09:09:44.567871897 +0000 UTC m=+0.148828082 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044)
Dec 06 09:09:44 np0005548790.localdomain podman[100746]: 2025-12-06 09:09:44.612596016 +0000 UTC m=+0.190953182 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=nova_migration_target, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64)
Dec 06 09:09:44 np0005548790.localdomain podman[100750]: 2025-12-06 09:09:44.537334928 +0000 UTC m=+0.110351620 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:09:44 np0005548790.localdomain podman[100740]: 2025-12-06 09:09:44.670329445 +0000 UTC m=+0.251285660 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:09:44 np0005548790.localdomain podman[100750]: 2025-12-06 09:09:44.681975657 +0000 UTC m=+0.254992339 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 06 09:09:44 np0005548790.localdomain podman[100739]: 2025-12-06 09:09:44.69587182 +0000 UTC m=+0.284715677 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com)
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:09:44 np0005548790.localdomain podman[100737]: 2025-12-06 09:09:44.653457852 +0000 UTC m=+0.243764718 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:09:44 np0005548790.localdomain podman[100737]: 2025-12-06 09:09:44.735249646 +0000 UTC m=+0.325556522 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 06 09:09:44 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:09:45 np0005548790.localdomain podman[100746]: 2025-12-06 09:09:45.010207259 +0000 UTC m=+0.588564465 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:09:45 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:09:45 np0005548790.localdomain sudo[100774]: pam_unix(sudo:session): session closed for user root
Dec 06 09:09:45 np0005548790.localdomain sudo[100897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:09:45 np0005548790.localdomain sudo[100897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:09:45 np0005548790.localdomain sudo[100897]: pam_unix(sudo:session): session closed for user root
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:09:49 np0005548790.localdomain podman[100912]: 2025-12-06 09:09:49.572077247 +0000 UTC m=+0.084596370 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 06 09:09:49 np0005548790.localdomain podman[100912]: 2025-12-06 09:09:49.6106038 +0000 UTC m=+0.123122883 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 06 09:09:49 np0005548790.localdomain podman[100912]: unhealthy
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: tmp-crun.kBj3QB.mount: Deactivated successfully.
Dec 06 09:09:49 np0005548790.localdomain podman[100913]: 2025-12-06 09:09:49.62923337 +0000 UTC m=+0.141595569 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:09:49 np0005548790.localdomain podman[100913]: 2025-12-06 09:09:49.648204719 +0000 UTC m=+0.160566978 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller)
Dec 06 09:09:49 np0005548790.localdomain podman[100913]: unhealthy
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:09:49 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:09:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:09:52 np0005548790.localdomain podman[100950]: 2025-12-06 09:09:52.563148039 +0000 UTC m=+0.078245440 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, distribution-scope=public, config_id=tripleo_step5, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:09:52 np0005548790.localdomain podman[100950]: 2025-12-06 09:09:52.591218552 +0000 UTC m=+0.106315983 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_id=tripleo_step5, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 09:09:52 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:09:57 np0005548790.localdomain sshd[100977]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:09:59 np0005548790.localdomain sshd[100977]: Received disconnect from 35.247.75.98 port 35808:11: Bye Bye [preauth]
Dec 06 09:09:59 np0005548790.localdomain sshd[100977]: Disconnected from authenticating user root 35.247.75.98 port 35808 [preauth]
Dec 06 09:10:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:10:01 np0005548790.localdomain podman[100979]: 2025-12-06 09:10:01.576202787 +0000 UTC m=+0.092603474 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:10:01 np0005548790.localdomain podman[100979]: 2025-12-06 09:10:01.7765812 +0000 UTC m=+0.292981887 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, vcs-type=git, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true)
Dec 06 09:10:01 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:10:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:10:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:10:08 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:10:08 np0005548790.localdomain recover_tripleo_nova_virtqemud[101010]: 62556
Dec 06 09:10:08 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:10:08 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:10:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:10:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:10:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:10:13 np0005548790.localdomain podman[101011]: 2025-12-06 09:10:13.572394793 +0000 UTC m=+0.087053395 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:10:13 np0005548790.localdomain podman[101011]: 2025-12-06 09:10:13.612103448 +0000 UTC m=+0.126762010 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd)
Dec 06 09:10:13 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:10:15 np0005548790.localdomain podman[101034]: 2025-12-06 09:10:15.578936033 +0000 UTC m=+0.082831532 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, release=1761123044)
Dec 06 09:10:15 np0005548790.localdomain podman[101033]: 2025-12-06 09:10:15.626373825 +0000 UTC m=+0.134211121 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044)
Dec 06 09:10:15 np0005548790.localdomain podman[101032]: 2025-12-06 09:10:15.691363428 +0000 UTC m=+0.200939110 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 09:10:15 np0005548790.localdomain podman[101031]: 2025-12-06 09:10:15.745184891 +0000 UTC m=+0.257303191 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, container_name=iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, 
maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3)
Dec 06 09:10:15 np0005548790.localdomain podman[101033]: 2025-12-06 09:10:15.775437363 +0000 UTC m=+0.283274729 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4)
Dec 06 09:10:15 np0005548790.localdomain podman[101031]: 2025-12-06 09:10:15.783073038 +0000 UTC m=+0.295191298 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, container_name=iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:10:15 np0005548790.localdomain podman[101040]: 2025-12-06 09:10:15.79655458 +0000 UTC m=+0.298686543 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git)
Dec 06 09:10:15 np0005548790.localdomain podman[101032]: 2025-12-06 09:10:15.823826321 +0000 UTC m=+0.333402073 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git)
Dec 06 09:10:15 np0005548790.localdomain podman[101040]: 2025-12-06 09:10:15.82826964 +0000 UTC m=+0.330401583 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:10:15 np0005548790.localdomain podman[101034]: 2025-12-06 09:10:15.983263126 +0000 UTC m=+0.487158595 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:10:15 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:10:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:10:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:10:20 np0005548790.localdomain podman[101143]: 2025-12-06 09:10:20.570554496 +0000 UTC m=+0.083474820 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 09:10:20 np0005548790.localdomain podman[101143]: 2025-12-06 09:10:20.584332875 +0000 UTC m=+0.097253219 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 06 09:10:20 np0005548790.localdomain podman[101143]: unhealthy
Dec 06 09:10:20 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:10:20 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:10:20 np0005548790.localdomain podman[101144]: 2025-12-06 09:10:20.629380143 +0000 UTC m=+0.139805710 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 06 09:10:20 np0005548790.localdomain podman[101144]: 2025-12-06 09:10:20.64418695 +0000 UTC m=+0.154612527 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:10:20 np0005548790.localdomain podman[101144]: unhealthy
Dec 06 09:10:20 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:10:20 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:10:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:10:23 np0005548790.localdomain podman[101184]: 2025-12-06 09:10:23.557462976 +0000 UTC m=+0.075795363 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=)
Dec 06 09:10:23 np0005548790.localdomain podman[101184]: 2025-12-06 09:10:23.608641819 +0000 UTC m=+0.126974216 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, tcib_managed=true, version=17.1.12)
Dec 06 09:10:23 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:10:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:10:32 np0005548790.localdomain systemd[1]: tmp-crun.AabQhx.mount: Deactivated successfully.
Dec 06 09:10:32 np0005548790.localdomain podman[101210]: 2025-12-06 09:10:32.551688208 +0000 UTC m=+0.074704635 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 06 09:10:32 np0005548790.localdomain podman[101210]: 2025-12-06 09:10:32.708423181 +0000 UTC m=+0.231439608 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:10:32 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:10:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:10:44 np0005548790.localdomain systemd[1]: tmp-crun.UW11we.mount: Deactivated successfully.
Dec 06 09:10:44 np0005548790.localdomain podman[101239]: 2025-12-06 09:10:44.57722441 +0000 UTC m=+0.088830124 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:10:44 np0005548790.localdomain podman[101239]: 2025-12-06 09:10:44.592341315 +0000 UTC m=+0.103947029 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:10:44 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:10:45 np0005548790.localdomain sudo[101259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:10:46 np0005548790.localdomain sudo[101259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:10:46 np0005548790.localdomain sudo[101259]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:10:46 np0005548790.localdomain sudo[101277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:10:46 np0005548790.localdomain sudo[101277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: tmp-crun.P4gtoL.mount: Deactivated successfully.
Dec 06 09:10:46 np0005548790.localdomain podman[101276]: 2025-12-06 09:10:46.114130327 +0000 UTC m=+0.085606027 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=)
Dec 06 09:10:46 np0005548790.localdomain podman[101275]: 2025-12-06 09:10:46.174143356 +0000 UTC m=+0.144781214 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute)
Dec 06 09:10:46 np0005548790.localdomain podman[101275]: 2025-12-06 09:10:46.199343842 +0000 UTC m=+0.169981730 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:10:46 np0005548790.localdomain podman[101284]: 2025-12-06 09:10:46.281553377 +0000 UTC m=+0.245989358 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_migration_target, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 06 09:10:46 np0005548790.localdomain podman[101276]: 2025-12-06 09:10:46.300355131 +0000 UTC m=+0.271830901 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:10:46 np0005548790.localdomain podman[101274]: 2025-12-06 09:10:46.152009883 +0000 UTC m=+0.128860987 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 09:10:46 np0005548790.localdomain podman[101274]: 2025-12-06 09:10:46.38533735 +0000 UTC m=+0.362188404 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:10:46 np0005548790.localdomain podman[101288]: 2025-12-06 09:10:46.435134085 +0000 UTC m=+0.393462253 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, 
build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 06 09:10:46 np0005548790.localdomain podman[101288]: 2025-12-06 09:10:46.469331132 +0000 UTC m=+0.427659290 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, 
build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:10:46 np0005548790.localdomain podman[101284]: 2025-12-06 09:10:46.631160402 +0000 UTC m=+0.595596473 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, maintainer=OpenStack TripleO Team)
Dec 06 09:10:46 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:10:46 np0005548790.localdomain sudo[101277]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:50 np0005548790.localdomain sudo[101424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:10:50 np0005548790.localdomain sudo[101424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:10:50 np0005548790.localdomain sudo[101424]: pam_unix(sudo:session): session closed for user root
Dec 06 09:10:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:10:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:10:51 np0005548790.localdomain podman[101439]: 2025-12-06 09:10:51.595733229 +0000 UTC m=+0.090990882 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:10:51 np0005548790.localdomain podman[101439]: 2025-12-06 09:10:51.6401684 +0000 UTC m=+0.135425993 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vendor=Red Hat, Inc.)
Dec 06 09:10:51 np0005548790.localdomain podman[101440]: 2025-12-06 09:10:51.643732136 +0000 UTC m=+0.139205025 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=)
Dec 06 09:10:51 np0005548790.localdomain podman[101440]: 2025-12-06 09:10:51.662287174 +0000 UTC m=+0.157760023 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller)
Dec 06 09:10:51 np0005548790.localdomain podman[101440]: unhealthy
Dec 06 09:10:51 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:10:51 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:10:51 np0005548790.localdomain podman[101439]: unhealthy
Dec 06 09:10:51 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:10:51 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:10:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:10:54 np0005548790.localdomain podman[101481]: 2025-12-06 09:10:54.567989737 +0000 UTC m=+0.082754501 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:10:54 np0005548790.localdomain podman[101481]: 2025-12-06 09:10:54.601130066 +0000 UTC m=+0.115894820 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:10:54 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:11:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:11:03 np0005548790.localdomain podman[101508]: 2025-12-06 09:11:03.574134519 +0000 UTC m=+0.083012058 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 06 09:11:03 np0005548790.localdomain podman[101508]: 2025-12-06 09:11:03.793281696 +0000 UTC m=+0.302159225 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4)
Dec 06 09:11:03 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:11:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:11:15 np0005548790.localdomain podman[101537]: 2025-12-06 09:11:15.573477448 +0000 UTC m=+0.090905178 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:11:15 np0005548790.localdomain podman[101537]: 2025-12-06 09:11:15.585347696 +0000 UTC m=+0.102775436 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, container_name=collectd, tcib_managed=true)
Dec 06 09:11:15 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:11:16 np0005548790.localdomain podman[101557]: 2025-12-06 09:11:16.571256516 +0000 UTC m=+0.087882668 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, vcs-type=git)
Dec 06 09:11:16 np0005548790.localdomain podman[101557]: 2025-12-06 09:11:16.582045725 +0000 UTC m=+0.098671897 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: tmp-crun.g1jkVZ.mount: Deactivated successfully.
Dec 06 09:11:16 np0005548790.localdomain podman[101559]: 2025-12-06 09:11:16.646057732 +0000 UTC m=+0.153069926 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Dec 06 09:11:16 np0005548790.localdomain podman[101559]: 2025-12-06 09:11:16.680123536 +0000 UTC m=+0.187135740 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, 
managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:11:16 np0005548790.localdomain podman[101558]: 2025-12-06 09:11:16.685368137 +0000 UTC m=+0.194902978 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 09:11:16 np0005548790.localdomain podman[101600]: 2025-12-06 09:11:16.741960664 +0000 UTC m=+0.145453792 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4)
Dec 06 09:11:16 np0005548790.localdomain podman[101558]: 2025-12-06 09:11:16.769124053 +0000 UTC m=+0.278658894 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, vcs-type=git)
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:11:16 np0005548790.localdomain podman[101635]: 2025-12-06 09:11:16.784224278 +0000 UTC m=+0.075172166 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=)
Dec 06 09:11:16 np0005548790.localdomain podman[101600]: 2025-12-06 09:11:16.805735575 +0000 UTC m=+0.209228743 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 06 09:11:16 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:11:17 np0005548790.localdomain podman[101635]: 2025-12-06 09:11:17.166166231 +0000 UTC m=+0.457114039 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, architecture=x86_64)
Dec 06 09:11:17 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: tmp-crun.9PeMsP.mount: Deactivated successfully.
Dec 06 09:11:22 np0005548790.localdomain podman[101672]: 2025-12-06 09:11:22.578852525 +0000 UTC m=+0.085735300 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:11:22 np0005548790.localdomain podman[101672]: 2025-12-06 09:11:22.616408412 +0000 UTC m=+0.123291217 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, release=1761123044, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller)
Dec 06 09:11:22 np0005548790.localdomain podman[101672]: unhealthy
Dec 06 09:11:22 np0005548790.localdomain podman[101671]: 2025-12-06 09:11:22.626525514 +0000 UTC m=+0.135236718 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4)
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:11:22 np0005548790.localdomain podman[101671]: 2025-12-06 09:11:22.667354778 +0000 UTC m=+0.176065972 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Dec 06 09:11:22 np0005548790.localdomain podman[101671]: unhealthy
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:11:22 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:11:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:11:25 np0005548790.localdomain podman[101712]: 2025-12-06 09:11:25.571292654 +0000 UTC m=+0.085841513 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:11:25 np0005548790.localdomain podman[101712]: 2025-12-06 09:11:25.598627278 +0000 UTC m=+0.113176147 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 06 09:11:25 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:11:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:11:34 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:11:34 np0005548790.localdomain recover_tripleo_nova_virtqemud[101745]: 62556
Dec 06 09:11:34 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:11:34 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:11:34 np0005548790.localdomain podman[101738]: 2025-12-06 09:11:34.577080257 +0000 UTC m=+0.086622544 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true)
Dec 06 09:11:34 np0005548790.localdomain podman[101738]: 2025-12-06 09:11:34.787212562 +0000 UTC m=+0.296754879 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, version=17.1.12, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd)
Dec 06 09:11:34 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:11:41 np0005548790.localdomain sshd[101770]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:11:42 np0005548790.localdomain sshd[101770]: Received disconnect from 35.247.75.98 port 42966:11: Bye Bye [preauth]
Dec 06 09:11:42 np0005548790.localdomain sshd[101770]: Disconnected from authenticating user root 35.247.75.98 port 42966 [preauth]
Dec 06 09:11:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:11:46 np0005548790.localdomain podman[101772]: 2025-12-06 09:11:46.563324186 +0000 UTC m=+0.076713069 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, release=1761123044, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 
collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git)
Dec 06 09:11:46 np0005548790.localdomain podman[101772]: 2025-12-06 09:11:46.578674517 +0000 UTC m=+0.092063380 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, release=1761123044, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, architecture=x86_64)
Dec 06 09:11:46 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:11:47 np0005548790.localdomain podman[101792]: 2025-12-06 09:11:47.576619579 +0000 UTC m=+0.087671172 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, 
managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, distribution-scope=public)
Dec 06 09:11:47 np0005548790.localdomain podman[101794]: 2025-12-06 09:11:47.643186815 +0000 UTC m=+0.148802462 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:11:47 np0005548790.localdomain podman[101792]: 2025-12-06 09:11:47.667849636 +0000 UTC m=+0.178901289 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid)
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: tmp-crun.4bWY5a.mount: Deactivated successfully.
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:11:47 np0005548790.localdomain podman[101793]: 2025-12-06 09:11:47.688024447 +0000 UTC m=+0.195329999 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, 
distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64)
Dec 06 09:11:47 np0005548790.localdomain podman[101794]: 2025-12-06 09:11:47.701194421 +0000 UTC m=+0.206810068 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:11:47 np0005548790.localdomain podman[101793]: 2025-12-06 09:11:47.752513966 +0000 UTC m=+0.259819458 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 06 09:11:47 np0005548790.localdomain podman[101802]: 2025-12-06 09:11:47.759898405 +0000 UTC m=+0.255583706 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:11:47 np0005548790.localdomain podman[101802]: 2025-12-06 09:11:47.798270383 +0000 UTC m=+0.293955674 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, vcs-type=git, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container)
Dec 06 09:11:47 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:11:47 np0005548790.localdomain podman[101796]: 2025-12-06 09:11:47.801415907 +0000 UTC m=+0.299825771 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute)
Dec 06 09:11:48 np0005548790.localdomain podman[101796]: 2025-12-06 09:11:48.168745939 +0000 UTC m=+0.667155743 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, 
url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:11:48 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:11:50 np0005548790.localdomain sudo[101906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:11:50 np0005548790.localdomain sudo[101906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:11:50 np0005548790.localdomain sudo[101906]: pam_unix(sudo:session): session closed for user root
Dec 06 09:11:50 np0005548790.localdomain sudo[101921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 09:11:50 np0005548790.localdomain sudo[101921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:11:50 np0005548790.localdomain sudo[101921]: pam_unix(sudo:session): session closed for user root
Dec 06 09:11:50 np0005548790.localdomain sudo[101958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:11:50 np0005548790.localdomain sudo[101958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:11:50 np0005548790.localdomain sudo[101958]: pam_unix(sudo:session): session closed for user root
Dec 06 09:11:50 np0005548790.localdomain sudo[101973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:11:50 np0005548790.localdomain sudo[101973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:11:51 np0005548790.localdomain sudo[101973]: pam_unix(sudo:session): session closed for user root
Dec 06 09:11:52 np0005548790.localdomain sudo[102021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:11:52 np0005548790.localdomain sudo[102021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:11:52 np0005548790.localdomain sudo[102021]: pam_unix(sudo:session): session closed for user root
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:11:53 np0005548790.localdomain podman[102036]: 2025-12-06 09:11:53.570758655 +0000 UTC m=+0.081285411 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, version=17.1.12)
Dec 06 09:11:53 np0005548790.localdomain podman[102036]: 2025-12-06 09:11:53.585581833 +0000 UTC m=+0.096108599 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:11:53 np0005548790.localdomain podman[102036]: unhealthy
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: tmp-crun.2Hau69.mount: Deactivated successfully.
Dec 06 09:11:53 np0005548790.localdomain podman[102037]: 2025-12-06 09:11:53.631949257 +0000 UTC m=+0.138203389 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, vcs-type=git, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:11:53 np0005548790.localdomain podman[102037]: 2025-12-06 09:11:53.676140192 +0000 UTC m=+0.182394354 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, url=https://www.redhat.com, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:11:53 np0005548790.localdomain podman[102037]: unhealthy
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:11:53 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:11:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:11:56 np0005548790.localdomain podman[102077]: 2025-12-06 09:11:56.565422816 +0000 UTC m=+0.074380446 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:11:56 np0005548790.localdomain podman[102077]: 2025-12-06 09:11:56.596270823 +0000 UTC m=+0.105228403 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team)
Dec 06 09:11:56 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:12:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:12:05 np0005548790.localdomain podman[102103]: 2025-12-06 09:12:05.559194934 +0000 UTC m=+0.079311977 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, 
distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, architecture=x86_64)
Dec 06 09:12:05 np0005548790.localdomain podman[102103]: 2025-12-06 09:12:05.755253762 +0000 UTC m=+0.275370865 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, config_id=tripleo_step1)
Dec 06 09:12:05 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:12:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:12:17 np0005548790.localdomain podman[102132]: 2025-12-06 09:12:17.565282107 +0000 UTC m=+0.076637086 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.openshift.expose-services=)
Dec 06 09:12:17 np0005548790.localdomain podman[102132]: 2025-12-06 09:12:17.583564588 +0000 UTC m=+0.094919607 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 
collectd, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:12:17 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:12:18 np0005548790.localdomain podman[102152]: 2025-12-06 09:12:18.583949625 +0000 UTC m=+0.092327807 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container)
Dec 06 09:12:18 np0005548790.localdomain podman[102152]: 2025-12-06 09:12:18.599354648 +0000 UTC m=+0.107732880 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:12:18 np0005548790.localdomain podman[102161]: 2025-12-06 09:12:18.653875011 +0000 UTC m=+0.146504842 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, release=1761123044, name=rhosp17/openstack-cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:12:18 np0005548790.localdomain podman[102154]: 2025-12-06 09:12:18.698125018 +0000 UTC m=+0.199999346 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:12:18 np0005548790.localdomain podman[102155]: 2025-12-06 09:12:18.736882826 +0000 UTC m=+0.236194455 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Dec 06 09:12:18 np0005548790.localdomain podman[102154]: 2025-12-06 09:12:18.756200094 +0000 UTC m=+0.258074382 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Dec 06 09:12:18 np0005548790.localdomain podman[102161]: 2025-12-06 09:12:18.765671249 +0000 UTC m=+0.258301060 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:12:18 np0005548790.localdomain podman[102153]: 2025-12-06 09:12:18.843680661 +0000 UTC m=+0.348148929 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 06 09:12:18 np0005548790.localdomain podman[102153]: 2025-12-06 09:12:18.876076379 +0000 UTC m=+0.380544637 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z)
Dec 06 09:12:18 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:12:19 np0005548790.localdomain podman[102155]: 2025-12-06 09:12:19.10430079 +0000 UTC m=+0.603612429 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12)
Dec 06 09:12:19 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:12:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:12:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:12:24 np0005548790.localdomain podman[102270]: 2025-12-06 09:12:24.564845438 +0000 UTC m=+0.079243187 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Dec 06 09:12:24 np0005548790.localdomain podman[102270]: 2025-12-06 09:12:24.582200073 +0000 UTC m=+0.096597872 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Dec 06 09:12:24 np0005548790.localdomain podman[102270]: unhealthy
Dec 06 09:12:24 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:12:24 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:12:24 np0005548790.localdomain podman[102271]: 2025-12-06 09:12:24.669233237 +0000 UTC m=+0.179513095 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, 
batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:12:24 np0005548790.localdomain podman[102271]: 2025-12-06 09:12:24.71409778 +0000 UTC m=+0.224377618 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=)
Dec 06 09:12:24 np0005548790.localdomain podman[102271]: unhealthy
Dec 06 09:12:24 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:12:24 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:12:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:12:27 np0005548790.localdomain podman[102309]: 2025-12-06 09:12:27.552859697 +0000 UTC m=+0.072642779 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 09:12:27 np0005548790.localdomain podman[102309]: 2025-12-06 09:12:27.609267121 +0000 UTC m=+0.129050183 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, config_id=tripleo_step5, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, architecture=x86_64, version=17.1.12)
Dec 06 09:12:27 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:12:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:12:36 np0005548790.localdomain podman[102334]: 2025-12-06 09:12:36.576736616 +0000 UTC m=+0.089885792 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Dec 06 09:12:36 np0005548790.localdomain podman[102334]: 2025-12-06 09:12:36.785332449 +0000 UTC m=+0.298481595 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:12:36 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:12:45 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:12:45 np0005548790.localdomain recover_tripleo_nova_virtqemud[102366]: 62556
Dec 06 09:12:45 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:12:45 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:12:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:12:48 np0005548790.localdomain podman[102367]: 2025-12-06 09:12:48.5683833 +0000 UTC m=+0.080316244 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, container_name=collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vcs-type=git, io.buildah.version=1.41.4)
Dec 06 09:12:48 np0005548790.localdomain podman[102367]: 2025-12-06 09:12:48.580180916 +0000 UTC m=+0.092113930 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 09:12:48 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:12:49 np0005548790.localdomain podman[102402]: 2025-12-06 09:12:49.589861593 +0000 UTC m=+0.089121981 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 09:12:49 np0005548790.localdomain podman[102402]: 2025-12-06 09:12:49.622999192 +0000 UTC m=+0.122259590 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, 
distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 09:12:49 np0005548790.localdomain podman[102390]: 2025-12-06 09:12:49.639153265 +0000 UTC m=+0.145149003 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1)
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:12:49 np0005548790.localdomain podman[102394]: 2025-12-06 09:12:49.688577401 +0000 UTC m=+0.190085609 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 06 09:12:49 np0005548790.localdomain podman[102388]: 2025-12-06 09:12:49.563903238 +0000 UTC m=+0.077339766 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 06 09:12:49 np0005548790.localdomain podman[102390]: 2025-12-06 09:12:49.741086649 +0000 UTC m=+0.247082397 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:12:49 np0005548790.localdomain podman[102389]: 2025-12-06 09:12:49.835533462 +0000 UTC m=+0.345509307 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git)
Dec 06 09:12:49 np0005548790.localdomain podman[102388]: 2025-12-06 09:12:49.854548531 +0000 UTC m=+0.367985049 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:12:49 np0005548790.localdomain podman[102389]: 2025-12-06 09:12:49.883862708 +0000 UTC m=+0.393838553 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:12:49 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:12:50 np0005548790.localdomain podman[102394]: 2025-12-06 09:12:50.064168183 +0000 UTC m=+0.565676361 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1)
Dec 06 09:12:50 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:12:52 np0005548790.localdomain sudo[102500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:12:52 np0005548790.localdomain sudo[102500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:12:52 np0005548790.localdomain sudo[102500]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:52 np0005548790.localdomain sudo[102515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:12:52 np0005548790.localdomain sudo[102515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:12:53 np0005548790.localdomain sudo[102515]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:53 np0005548790.localdomain sudo[102561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:12:53 np0005548790.localdomain sudo[102561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:12:53 np0005548790.localdomain sudo[102561]: pam_unix(sudo:session): session closed for user root
Dec 06 09:12:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:12:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:12:55 np0005548790.localdomain podman[102576]: 2025-12-06 09:12:55.566496141 +0000 UTC m=+0.079453751 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:12:55 np0005548790.localdomain podman[102576]: 2025-12-06 09:12:55.633320183 +0000 UTC m=+0.146277833 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, architecture=x86_64)
Dec 06 09:12:55 np0005548790.localdomain podman[102576]: unhealthy
Dec 06 09:12:55 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:12:55 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:12:55 np0005548790.localdomain podman[102577]: 2025-12-06 09:12:55.649084896 +0000 UTC m=+0.155837060 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, distribution-scope=public)
Dec 06 09:12:55 np0005548790.localdomain podman[102577]: 2025-12-06 09:12:55.668161668 +0000 UTC m=+0.174913872 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T23:34:05Z, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 06 09:12:55 np0005548790.localdomain podman[102577]: unhealthy
Dec 06 09:12:55 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:12:55 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:12:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:12:58 np0005548790.localdomain podman[102617]: 2025-12-06 09:12:58.563685607 +0000 UTC m=+0.079255386 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:12:58 np0005548790.localdomain podman[102617]: 2025-12-06 09:12:58.621350774 +0000 UTC m=+0.136920543 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible)
Dec 06 09:12:58 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:13:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:13:07 np0005548790.localdomain podman[102643]: 2025-12-06 09:13:07.566842409 +0000 UTC m=+0.081509597 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 06 09:13:07 np0005548790.localdomain podman[102643]: 2025-12-06 09:13:07.804220846 +0000 UTC m=+0.318888004 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 06 09:13:07 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:13:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:13:19 np0005548790.localdomain podman[102672]: 2025-12-06 09:13:19.566430117 +0000 UTC m=+0.080145670 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-collectd-container)
Dec 06 09:13:19 np0005548790.localdomain podman[102672]: 2025-12-06 09:13:19.57807997 +0000 UTC m=+0.091795513 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=)
Dec 06 09:13:19 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:13:20 np0005548790.localdomain podman[102702]: 2025-12-06 09:13:20.56999016 +0000 UTC m=+0.075871875 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=nova_migration_target)
Dec 06 09:13:20 np0005548790.localdomain podman[102693]: 2025-12-06 09:13:20.629904527 +0000 UTC m=+0.142518403 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:13:20 np0005548790.localdomain podman[102706]: 2025-12-06 09:13:20.588051894 +0000 UTC m=+0.085815261 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=logrotate_crond, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible)
Dec 06 09:13:20 np0005548790.localdomain podman[102706]: 2025-12-06 09:13:20.670146826 +0000 UTC m=+0.167910163 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-cron-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:13:20 np0005548790.localdomain podman[102693]: 2025-12-06 09:13:20.687097361 +0000 UTC m=+0.199711207 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:13:20 np0005548790.localdomain podman[102692]: 2025-12-06 09:13:20.656933392 +0000 UTC m=+0.168686164 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, container_name=iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 06 09:13:20 np0005548790.localdomain podman[102692]: 2025-12-06 09:13:20.737294617 +0000 UTC m=+0.249047339 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step3, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:13:20 np0005548790.localdomain podman[102694]: 2025-12-06 09:13:20.691457318 +0000 UTC m=+0.195388551 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:13:20 np0005548790.localdomain podman[102694]: 2025-12-06 09:13:20.823226211 +0000 UTC m=+0.327157454 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public)
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:13:20 np0005548790.localdomain podman[102702]: 2025-12-06 09:13:20.924005864 +0000 UTC m=+0.429887529 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:13:20 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:13:26 np0005548790.localdomain podman[102806]: 2025-12-06 09:13:26.570903938 +0000 UTC m=+0.077385647 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git)
Dec 06 09:13:26 np0005548790.localdomain podman[102806]: 2025-12-06 09:13:26.590454282 +0000 UTC m=+0.096936001 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=)
Dec 06 09:13:26 np0005548790.localdomain podman[102806]: unhealthy
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: tmp-crun.wXm21D.mount: Deactivated successfully.
Dec 06 09:13:26 np0005548790.localdomain podman[102805]: 2025-12-06 09:13:26.67500341 +0000 UTC m=+0.182629029 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:13:26 np0005548790.localdomain podman[102805]: 2025-12-06 09:13:26.716171124 +0000 UTC m=+0.223796763 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, release=1761123044, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:13:26 np0005548790.localdomain podman[102805]: unhealthy
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:13:26 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:13:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:13:29 np0005548790.localdomain podman[102845]: 2025-12-06 09:13:29.567294373 +0000 UTC m=+0.081893417 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:13:29 np0005548790.localdomain podman[102845]: 2025-12-06 09:13:29.650196777 +0000 UTC m=+0.164795821 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 06 09:13:29 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:13:30 np0005548790.localdomain sshd[102872]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:13:33 np0005548790.localdomain sshd[102872]: Received disconnect from 35.247.75.98 port 38806:11: Bye Bye [preauth]
Dec 06 09:13:33 np0005548790.localdomain sshd[102872]: Disconnected from authenticating user root 35.247.75.98 port 38806 [preauth]
Dec 06 09:13:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:13:38 np0005548790.localdomain podman[102874]: 2025-12-06 09:13:38.579877188 +0000 UTC m=+0.086517481 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, release=1761123044, build-date=2025-11-18T22:49:46Z, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:13:38 np0005548790.localdomain podman[102874]: 2025-12-06 09:13:38.794242537 +0000 UTC m=+0.300882860 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:13:38 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:13:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:13:50 np0005548790.localdomain podman[102903]: 2025-12-06 09:13:50.5598757 +0000 UTC m=+0.077501330 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:13:50 np0005548790.localdomain podman[102903]: 2025-12-06 09:13:50.56883221 +0000 UTC m=+0.086457870 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 06 09:13:50 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:13:51 np0005548790.localdomain podman[102929]: 2025-12-06 09:13:51.585643109 +0000 UTC m=+0.084419725 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z)
Dec 06 09:13:51 np0005548790.localdomain podman[102924]: 2025-12-06 09:13:51.640310845 +0000 UTC m=+0.148501003 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Dec 06 09:13:51 np0005548790.localdomain podman[102924]: 2025-12-06 09:13:51.654042363 +0000 UTC m=+0.162232501 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:13:51 np0005548790.localdomain podman[102925]: 2025-12-06 09:13:51.743546854 +0000 UTC m=+0.249575414 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, version=17.1.12, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 06 09:13:51 np0005548790.localdomain podman[102925]: 2025-12-06 09:13:51.774221896 +0000 UTC m=+0.280250496 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public)
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:13:51 np0005548790.localdomain podman[102938]: 2025-12-06 09:13:51.790834192 +0000 UTC m=+0.284892881 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, build-date=2025-11-18T22:49:32Z)
Dec 06 09:13:51 np0005548790.localdomain podman[102938]: 2025-12-06 09:13:51.829106958 +0000 UTC m=+0.323165607 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:13:51 np0005548790.localdomain podman[102926]: 2025-12-06 09:13:51.834806171 +0000 UTC m=+0.335923080 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, 
vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 06 09:13:51 np0005548790.localdomain podman[102926]: 2025-12-06 09:13:51.917006586 +0000 UTC m=+0.418123495 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:13:51 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:13:51 np0005548790.localdomain podman[102929]: 2025-12-06 09:13:51.994071732 +0000 UTC m=+0.492848278 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Dec 06 09:13:52 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:13:53 np0005548790.localdomain sudo[103032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:13:53 np0005548790.localdomain sudo[103032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:13:54 np0005548790.localdomain sudo[103032]: pam_unix(sudo:session): session closed for user root
Dec 06 09:13:54 np0005548790.localdomain sudo[103047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 09:13:54 np0005548790.localdomain sudo[103047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:13:54 np0005548790.localdomain systemd[1]: tmp-crun.QuqCkQ.mount: Deactivated successfully.
Dec 06 09:13:54 np0005548790.localdomain podman[103133]: 2025-12-06 09:13:54.931478676 +0000 UTC m=+0.095582074 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, release=1763362218, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z)
Dec 06 09:13:55 np0005548790.localdomain podman[103133]: 2025-12-06 09:13:55.068227262 +0000 UTC m=+0.232330670 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218)
Dec 06 09:13:55 np0005548790.localdomain sudo[103047]: pam_unix(sudo:session): session closed for user root
Dec 06 09:13:55 np0005548790.localdomain sudo[103200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:13:55 np0005548790.localdomain sudo[103200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:13:55 np0005548790.localdomain sudo[103200]: pam_unix(sudo:session): session closed for user root
Dec 06 09:13:55 np0005548790.localdomain sudo[103215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:13:55 np0005548790.localdomain sudo[103215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:13:56 np0005548790.localdomain sudo[103215]: pam_unix(sudo:session): session closed for user root
Dec 06 09:13:56 np0005548790.localdomain sudo[103261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:13:56 np0005548790.localdomain sudo[103261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:13:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:13:56 np0005548790.localdomain sudo[103261]: pam_unix(sudo:session): session closed for user root
Dec 06 09:13:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:13:57 np0005548790.localdomain podman[103277]: 2025-12-06 09:13:57.015484753 +0000 UTC m=+0.069630758 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container)
Dec 06 09:13:57 np0005548790.localdomain podman[103276]: 2025-12-06 09:13:57.024971017 +0000 UTC m=+0.075939207 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, container_name=ovn_metadata_agent)
Dec 06 09:13:57 np0005548790.localdomain podman[103276]: 2025-12-06 09:13:57.035044468 +0000 UTC m=+0.086012708 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 06 09:13:57 np0005548790.localdomain podman[103276]: unhealthy
Dec 06 09:13:57 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:13:57 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:13:57 np0005548790.localdomain podman[103277]: 2025-12-06 09:13:57.05641972 +0000 UTC m=+0.110565745 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, container_name=ovn_controller, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:13:57 np0005548790.localdomain podman[103277]: unhealthy
Dec 06 09:13:57 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:13:57 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:14:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:14:00 np0005548790.localdomain podman[103314]: 2025-12-06 09:14:00.552667361 +0000 UTC m=+0.070394589 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 09:14:00 np0005548790.localdomain podman[103314]: 2025-12-06 09:14:00.587387943 +0000 UTC m=+0.105115221 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step5)
Dec 06 09:14:00 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:14:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:14:09 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:14:09 np0005548790.localdomain recover_tripleo_nova_virtqemud[103347]: 62556
Dec 06 09:14:09 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:14:09 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:14:09 np0005548790.localdomain podman[103340]: 2025-12-06 09:14:09.570138725 +0000 UTC m=+0.087204949 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
build-date=2025-11-18T22:49:46Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:14:09 np0005548790.localdomain podman[103340]: 2025-12-06 09:14:09.771223359 +0000 UTC m=+0.288289513 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=metrics_qdr, release=1761123044, distribution-scope=public, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:14:09 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:14:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:14:21 np0005548790.localdomain podman[103371]: 2025-12-06 09:14:21.558503982 +0000 UTC m=+0.076701498 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc.)
Dec 06 09:14:21 np0005548790.localdomain podman[103371]: 2025-12-06 09:14:21.572122527 +0000 UTC m=+0.090320053 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd)
Dec 06 09:14:21 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:14:22 np0005548790.localdomain podman[103391]: 2025-12-06 09:14:22.588217466 +0000 UTC m=+0.095832481 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:14:22 np0005548790.localdomain podman[103391]: 2025-12-06 09:14:22.633592133 +0000 UTC m=+0.141207158 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, release=1761123044, version=17.1.12, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, 
com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1)
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:14:22 np0005548790.localdomain podman[103393]: 2025-12-06 09:14:22.638747151 +0000 UTC m=+0.139776169 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red 
Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.12)
Dec 06 09:14:22 np0005548790.localdomain podman[103392]: 2025-12-06 09:14:22.693037838 +0000 UTC m=+0.202310197 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible)
Dec 06 09:14:22 np0005548790.localdomain podman[103392]: 2025-12-06 09:14:22.777130653 +0000 UTC m=+0.286402972 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:14:22 np0005548790.localdomain podman[103404]: 2025-12-06 09:14:22.746979794 +0000 UTC m=+0.243975034 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:14:22 np0005548790.localdomain podman[103393]: 2025-12-06 09:14:22.826829435 +0000 UTC m=+0.327858403 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:14:22 np0005548790.localdomain podman[103406]: 2025-12-06 09:14:22.903086321 +0000 UTC m=+0.396735841 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 06 09:14:22 np0005548790.localdomain podman[103406]: 2025-12-06 09:14:22.939163237 +0000 UTC m=+0.432812797 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, tcib_managed=true, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 06 09:14:22 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:14:23 np0005548790.localdomain podman[103404]: 2025-12-06 09:14:23.156145127 +0000 UTC m=+0.653140387 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044)
Dec 06 09:14:23 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:14:23 np0005548790.localdomain systemd[1]: tmp-crun.Pjg8eq.mount: Deactivated successfully.
Dec 06 09:14:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:14:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:14:27 np0005548790.localdomain podman[103502]: 2025-12-06 09:14:27.565576386 +0000 UTC m=+0.079458622 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Dec 06 09:14:27 np0005548790.localdomain podman[103502]: 2025-12-06 09:14:27.578113562 +0000 UTC m=+0.091995808 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:14:27 np0005548790.localdomain podman[103502]: unhealthy
Dec 06 09:14:27 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:14:27 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:14:27 np0005548790.localdomain podman[103503]: 2025-12-06 09:14:27.621844535 +0000 UTC m=+0.128828276 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ovn_controller, release=1761123044, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:14:27 np0005548790.localdomain podman[103503]: 2025-12-06 09:14:27.661832067 +0000 UTC m=+0.168815738 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, release=1761123044, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4)
Dec 06 09:14:27 np0005548790.localdomain podman[103503]: unhealthy
Dec 06 09:14:27 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:14:27 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:14:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:14:31 np0005548790.localdomain podman[103540]: 2025-12-06 09:14:31.570351954 +0000 UTC m=+0.085383071 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=nova_compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:14:31 np0005548790.localdomain podman[103540]: 2025-12-06 09:14:31.602125986 +0000 UTC m=+0.117157103 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=)
Dec 06 09:14:31 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:14:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:14:40 np0005548790.localdomain podman[103568]: 2025-12-06 09:14:40.548300839 +0000 UTC m=+0.064189183 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, maintainer=OpenStack 
TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:14:40 np0005548790.localdomain podman[103568]: 2025-12-06 09:14:40.763366236 +0000 UTC m=+0.279254530 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 06 09:14:40 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:14:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:14:52 np0005548790.localdomain podman[103598]: 2025-12-06 09:14:52.567168263 +0000 UTC m=+0.084073896 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, distribution-scope=public, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 09:14:52 np0005548790.localdomain podman[103598]: 2025-12-06 09:14:52.582170725 +0000 UTC m=+0.099076358 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, version=17.1.12)
Dec 06 09:14:52 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:14:53 np0005548790.localdomain podman[103618]: 2025-12-06 09:14:53.576760877 +0000 UTC m=+0.090312783 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com)
Dec 06 09:14:53 np0005548790.localdomain podman[103618]: 2025-12-06 09:14:53.589449378 +0000 UTC m=+0.103001324 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:14:53 np0005548790.localdomain podman[103622]: 2025-12-06 09:14:53.636947641 +0000 UTC m=+0.139722958 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 06 09:14:53 np0005548790.localdomain podman[103619]: 2025-12-06 09:14:53.690149147 +0000 UTC m=+0.198706349 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_compute, version=17.1.12, description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 06 09:14:53 np0005548790.localdomain podman[103630]: 2025-12-06 09:14:53.752388717 +0000 UTC m=+0.250709865 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond)
Dec 06 09:14:53 np0005548790.localdomain podman[103619]: 2025-12-06 09:14:53.757502254 +0000 UTC m=+0.266059416 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, 
name=rhosp17/openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64)
Dec 06 09:14:53 np0005548790.localdomain podman[103620]: 2025-12-06 09:14:53.791824475 +0000 UTC m=+0.297279224 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:14:53 np0005548790.localdomain podman[103630]: 2025-12-06 09:14:53.844046474 +0000 UTC m=+0.342367633 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red 
Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team)
Dec 06 09:14:53 np0005548790.localdomain podman[103620]: 2025-12-06 09:14:53.852211184 +0000 UTC m=+0.357665953 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:12:45Z, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4)
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:14:53 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:14:54 np0005548790.localdomain podman[103622]: 2025-12-06 09:14:54.015205175 +0000 UTC m=+0.517980492 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 09:14:54 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:14:54 np0005548790.localdomain systemd[1]: tmp-crun.VnwbTD.mount: Deactivated successfully.
Dec 06 09:14:57 np0005548790.localdomain sudo[103726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:14:57 np0005548790.localdomain sudo[103726]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:14:57 np0005548790.localdomain sudo[103726]: pam_unix(sudo:session): session closed for user root
Dec 06 09:14:57 np0005548790.localdomain sudo[103741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:14:57 np0005548790.localdomain sudo[103741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:14:57 np0005548790.localdomain sudo[103741]: pam_unix(sudo:session): session closed for user root
Dec 06 09:14:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:14:58 np0005548790.localdomain sudo[103787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:14:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:14:58 np0005548790.localdomain sudo[103787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:14:58 np0005548790.localdomain sudo[103787]: pam_unix(sudo:session): session closed for user root
Dec 06 09:14:58 np0005548790.localdomain podman[103801]: 2025-12-06 09:14:58.580301317 +0000 UTC m=+0.085874385 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public)
Dec 06 09:14:58 np0005548790.localdomain podman[103801]: 2025-12-06 09:14:58.636376831 +0000 UTC m=+0.141949869 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public)
Dec 06 09:14:58 np0005548790.localdomain podman[103801]: unhealthy
Dec 06 09:14:58 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:14:58 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:14:58 np0005548790.localdomain podman[103802]: 2025-12-06 09:14:58.685818687 +0000 UTC m=+0.189916465 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 06 09:14:58 np0005548790.localdomain podman[103802]: 2025-12-06 09:14:58.70792088 +0000 UTC m=+0.212018658 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:34:05Z, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4)
Dec 06 09:14:58 np0005548790.localdomain podman[103802]: unhealthy
Dec 06 09:14:58 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:14:58 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:15:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:15:02 np0005548790.localdomain podman[103841]: 2025-12-06 09:15:02.576861966 +0000 UTC m=+0.091814094 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 06 09:15:02 np0005548790.localdomain podman[103841]: 2025-12-06 09:15:02.608392532 +0000 UTC m=+0.123344619 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, architecture=x86_64, container_name=nova_compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z)
Dec 06 09:15:02 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:15:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:15:11 np0005548790.localdomain podman[103867]: 2025-12-06 09:15:11.582179265 +0000 UTC m=+0.094606478 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044)
Dec 06 09:15:11 np0005548790.localdomain podman[103867]: 2025-12-06 09:15:11.794271193 +0000 UTC m=+0.306698396 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1761123044, url=https://www.redhat.com)
Dec 06 09:15:11 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:15:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:15:23 np0005548790.localdomain podman[103896]: 2025-12-06 09:15:23.568879927 +0000 UTC m=+0.083923992 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:15:23 np0005548790.localdomain podman[103896]: 2025-12-06 09:15:23.582307508 +0000 UTC m=+0.097351583 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, release=1761123044, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public)
Dec 06 09:15:23 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:15:24 np0005548790.localdomain podman[103918]: 2025-12-06 09:15:24.572479421 +0000 UTC m=+0.084528028 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:15:24 np0005548790.localdomain podman[103916]: 2025-12-06 09:15:24.627355453 +0000 UTC m=+0.141783094 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 09:15:24 np0005548790.localdomain podman[103915]: 2025-12-06 09:15:24.6716012 +0000 UTC m=+0.188448755 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:15:24 np0005548790.localdomain podman[103917]: 2025-12-06 09:15:24.678569326 +0000 UTC m=+0.190715815 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, container_name=ceilometer_agent_ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git)
Dec 06 09:15:24 np0005548790.localdomain podman[103916]: 2025-12-06 09:15:24.687179687 +0000 UTC m=+0.201607348 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute)
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain podman[103919]: 2025-12-06 09:15:24.596195968 +0000 UTC m=+0.104197876 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, distribution-scope=public, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 06 09:15:24 np0005548790.localdomain podman[103917]: 2025-12-06 09:15:24.706014152 +0000 UTC m=+0.218160641 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, 
tcib_managed=true, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain podman[103919]: 2025-12-06 09:15:24.731226419 +0000 UTC m=+0.239228347 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack 
TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, container_name=logrotate_crond)
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain podman[103915]: 2025-12-06 09:15:24.757116443 +0000 UTC m=+0.273963998 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=iscsid)
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain podman[103918]: 2025-12-06 09:15:24.92069344 +0000 UTC m=+0.432742037 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible)
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain recover_tripleo_nova_virtqemud[104026]: 62556
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:15:24 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:15:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:15:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:15:29 np0005548790.localdomain podman[104027]: 2025-12-06 09:15:29.57555555 +0000 UTC m=+0.088018131 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 06 09:15:29 np0005548790.localdomain podman[104027]: 2025-12-06 09:15:29.620342551 +0000 UTC m=+0.132805082 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 06 09:15:29 np0005548790.localdomain podman[104027]: unhealthy
Dec 06 09:15:29 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:15:29 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:15:29 np0005548790.localdomain podman[104028]: 2025-12-06 09:15:29.621862452 +0000 UTC m=+0.131688713 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git)
Dec 06 09:15:29 np0005548790.localdomain podman[104028]: 2025-12-06 09:15:29.70529707 +0000 UTC m=+0.215123321 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, container_name=ovn_controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-ovn-controller-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com)
Dec 06 09:15:29 np0005548790.localdomain podman[104028]: unhealthy
Dec 06 09:15:29 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:15:29 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:15:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:15:33 np0005548790.localdomain podman[104065]: 2025-12-06 09:15:33.563170119 +0000 UTC m=+0.078921418 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Dec 06 09:15:33 np0005548790.localdomain podman[104065]: 2025-12-06 09:15:33.591414846 +0000 UTC m=+0.107166105 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, distribution-scope=public, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:15:33 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Deactivated successfully.
Dec 06 09:15:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:15:42 np0005548790.localdomain podman[104091]: 2025-12-06 09:15:42.619230749 +0000 UTC m=+0.136011290 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, release=1761123044, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:49:46Z)
Dec 06 09:15:42 np0005548790.localdomain podman[104091]: 2025-12-06 09:15:42.86129639 +0000 UTC m=+0.378076881 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 06 09:15:42 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:15:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:15:54 np0005548790.localdomain podman[104122]: 2025-12-06 09:15:54.585897352 +0000 UTC m=+0.102359406 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, 
name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:15:54 np0005548790.localdomain podman[104122]: 2025-12-06 09:15:54.594667787 +0000 UTC m=+0.111129801 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack 
TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, com.redhat.component=openstack-collectd-container)
Dec 06 09:15:54 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:15:55 np0005548790.localdomain podman[104142]: 2025-12-06 09:15:55.57356229 +0000 UTC m=+0.089709077 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, container_name=iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: tmp-crun.mbcGVo.mount: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain podman[104142]: 2025-12-06 09:15:55.582082988 +0000 UTC m=+0.098229775 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain podman[104143]: 2025-12-06 09:15:55.636161338 +0000 UTC m=+0.147323372 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: tmp-crun.MlXSTE.mount: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain podman[104150]: 2025-12-06 09:15:55.588480229 +0000 UTC m=+0.091559467 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044)
Dec 06 09:15:55 np0005548790.localdomain podman[104151]: 2025-12-06 09:15:55.693960848 +0000 UTC m=+0.195926365 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64)
Dec 06 09:15:55 np0005548790.localdomain podman[104143]: 2025-12-06 09:15:55.71862831 +0000 UTC m=+0.229790284 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com)
Dec 06 09:15:55 np0005548790.localdomain podman[104151]: 2025-12-06 09:15:55.72910703 +0000 UTC m=+0.231072557 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:15:55 np0005548790.localdomain podman[104144]: 2025-12-06 09:15:55.736457808 +0000 UTC m=+0.242326631 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain podman[104144]: 2025-12-06 09:15:55.789192312 +0000 UTC m=+0.295061115 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:15:55 np0005548790.localdomain podman[104150]: 2025-12-06 09:15:55.957150616 +0000 UTC m=+0.460229904 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20251118.1, 
container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:15:55 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:15:58 np0005548790.localdomain sudo[104255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:15:58 np0005548790.localdomain sudo[104255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:15:58 np0005548790.localdomain sudo[104255]: pam_unix(sudo:session): session closed for user root
Dec 06 09:15:58 np0005548790.localdomain sudo[104270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:15:58 np0005548790.localdomain sudo[104270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:15:59 np0005548790.localdomain sudo[104270]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:00 np0005548790.localdomain sudo[104317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:16:00 np0005548790.localdomain sudo[104317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:16:00 np0005548790.localdomain sudo[104317]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:16:00 np0005548790.localdomain podman[104333]: 2025-12-06 09:16:00.253956296 +0000 UTC m=+0.079675748 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: tmp-crun.ZTAgkc.mount: Deactivated successfully.
Dec 06 09:16:00 np0005548790.localdomain podman[104332]: 2025-12-06 09:16:00.302015164 +0000 UTC m=+0.130740427 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git)
Dec 06 09:16:00 np0005548790.localdomain podman[104332]: 2025-12-06 09:16:00.319212975 +0000 UTC m=+0.147938258 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:16:00 np0005548790.localdomain podman[104332]: unhealthy
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:16:00 np0005548790.localdomain podman[104333]: 2025-12-06 09:16:00.370722496 +0000 UTC m=+0.196441928 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, container_name=ovn_controller, io.openshift.expose-services=, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container)
Dec 06 09:16:00 np0005548790.localdomain podman[104333]: unhealthy
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:16:00 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:16:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:16:04 np0005548790.localdomain podman[104373]: 2025-12-06 09:16:04.565817588 +0000 UTC m=+0.083126121 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, container_name=nova_compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:16:04 np0005548790.localdomain podman[104373]: 2025-12-06 09:16:04.613169678 +0000 UTC m=+0.130478241 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=nova_compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 06 09:16:04 np0005548790.localdomain podman[104373]: unhealthy
Dec 06 09:16:04 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:16:04 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:16:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:16:13 np0005548790.localdomain systemd[1]: tmp-crun.KA59rq.mount: Deactivated successfully.
Dec 06 09:16:13 np0005548790.localdomain podman[104395]: 2025-12-06 09:16:13.576953083 +0000 UTC m=+0.094848844 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red 
Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step1, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:16:13 np0005548790.localdomain podman[104395]: 2025-12-06 09:16:13.774959243 +0000 UTC m=+0.292855034 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:16:13 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:16:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63288 DF PROTO=TCP SPT=34164 DPT=9102 SEQ=2102941100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3AB400000000001030307) 
Dec 06 09:16:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13467 DF PROTO=TCP SPT=35202 DPT=9882 SEQ=3303084805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3AB470000000001030307) 
Dec 06 09:16:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13468 DF PROTO=TCP SPT=35202 DPT=9882 SEQ=3303084805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3AF5F0000000001030307) 
Dec 06 09:16:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63289 DF PROTO=TCP SPT=34164 DPT=9102 SEQ=2102941100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3AF5F0000000001030307) 
Dec 06 09:16:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63290 DF PROTO=TCP SPT=34164 DPT=9102 SEQ=2102941100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3B75F0000000001030307) 
Dec 06 09:16:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13469 DF PROTO=TCP SPT=35202 DPT=9882 SEQ=3303084805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3B75F0000000001030307) 
Dec 06 09:16:23 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25275 DF PROTO=TCP SPT=57506 DPT=9105 SEQ=3704715307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3BF9D0000000001030307) 
Dec 06 09:16:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25276 DF PROTO=TCP SPT=57506 DPT=9105 SEQ=3704715307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3C39F0000000001030307) 
Dec 06 09:16:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:16:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13470 DF PROTO=TCP SPT=35202 DPT=9882 SEQ=3303084805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3C71F0000000001030307) 
Dec 06 09:16:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63291 DF PROTO=TCP SPT=34164 DPT=9102 SEQ=2102941100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3C71F0000000001030307) 
Dec 06 09:16:25 np0005548790.localdomain podman[104424]: 2025-12-06 09:16:25.567841958 +0000 UTC m=+0.085279159 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, version=17.1.12, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044)
Dec 06 09:16:25 np0005548790.localdomain podman[104424]: 2025-12-06 09:16:25.577487666 +0000 UTC m=+0.094924877 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z)
Dec 06 09:16:25 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: tmp-crun.EC9UQ9.mount: Deactivated successfully.
Dec 06 09:16:26 np0005548790.localdomain podman[104443]: 2025-12-06 09:16:26.627477574 +0000 UTC m=+0.139583594 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:16:26 np0005548790.localdomain podman[104444]: 2025-12-06 09:16:26.58146693 +0000 UTC m=+0.090219091 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z)
Dec 06 09:16:26 np0005548790.localdomain podman[104445]: 2025-12-06 09:16:26.63665454 +0000 UTC m=+0.141208698 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:16:26 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25277 DF PROTO=TCP SPT=57506 DPT=9105 SEQ=3704715307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3CB9F0000000001030307) 
Dec 06 09:16:26 np0005548790.localdomain podman[104443]: 2025-12-06 09:16:26.68402717 +0000 UTC m=+0.196133260 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.buildah.version=1.41.4, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=iscsid, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:16:26 np0005548790.localdomain podman[104446]: 2025-12-06 09:16:26.680953008 +0000 UTC m=+0.183867032 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4)
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:16:26 np0005548790.localdomain podman[104452]: 2025-12-06 09:16:26.605091684 +0000 UTC m=+0.101252246 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 06 09:16:26 np0005548790.localdomain podman[104444]: 2025-12-06 09:16:26.713056899 +0000 UTC m=+0.221809040 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:16:26 np0005548790.localdomain podman[104452]: 2025-12-06 09:16:26.738125141 +0000 UTC m=+0.234285733 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public, 
release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, container_name=logrotate_crond)
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:16:26 np0005548790.localdomain podman[104445]: 2025-12-06 09:16:26.76378791 +0000 UTC m=+0.268342038 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:16:26 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:16:27 np0005548790.localdomain podman[104446]: 2025-12-06 09:16:27.082465585 +0000 UTC m=+0.585379669 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., container_name=nova_migration_target, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, url=https://www.redhat.com, 
name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 06 09:16:27 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:16:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:16:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:16:30 np0005548790.localdomain podman[104557]: 2025-12-06 09:16:30.574351199 +0000 UTC m=+0.080816319 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:16:30 np0005548790.localdomain podman[104557]: 2025-12-06 09:16:30.592394162 +0000 UTC m=+0.098859302 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 06 09:16:30 np0005548790.localdomain podman[104557]: unhealthy
Dec 06 09:16:30 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:16:30 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:16:30 np0005548790.localdomain podman[104556]: 2025-12-06 09:16:30.678936583 +0000 UTC m=+0.187425037 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.12, container_name=ovn_metadata_agent, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:16:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25278 DF PROTO=TCP SPT=57506 DPT=9105 SEQ=3704715307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3DB600000000001030307) 
Dec 06 09:16:30 np0005548790.localdomain podman[104556]: 2025-12-06 09:16:30.722171063 +0000 UTC m=+0.230659517 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:16:30 np0005548790.localdomain podman[104556]: unhealthy
Dec 06 09:16:30 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:16:30 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:16:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63292 DF PROTO=TCP SPT=34164 DPT=9102 SEQ=2102941100 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3E71F0000000001030307) 
Dec 06 09:16:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13471 DF PROTO=TCP SPT=35202 DPT=9882 SEQ=3303084805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3E7200000000001030307) 
Dec 06 09:16:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15808 DF PROTO=TCP SPT=57776 DPT=9101 SEQ=2291162255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3EB370000000001030307) 
Dec 06 09:16:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:16:35 np0005548790.localdomain podman[104594]: 2025-12-06 09:16:35.562597751 +0000 UTC m=+0.080954962 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team)
Dec 06 09:16:35 np0005548790.localdomain podman[104594]: 2025-12-06 09:16:35.609318793 +0000 UTC m=+0.127676094 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, release=1761123044, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:16:35 np0005548790.localdomain podman[104594]: unhealthy
Dec 06 09:16:35 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:16:35 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:16:35 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15809 DF PROTO=TCP SPT=57776 DPT=9101 SEQ=2291162255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3EF5F0000000001030307) 
Dec 06 09:16:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64087 DF PROTO=TCP SPT=55020 DPT=9100 SEQ=3254910829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3F0B60000000001030307) 
Dec 06 09:16:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64088 DF PROTO=TCP SPT=55020 DPT=9100 SEQ=3254910829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3F49F0000000001030307) 
Dec 06 09:16:37 np0005548790.localdomain sshd[104616]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:16:37 np0005548790.localdomain sshd[104616]: Accepted publickey for zuul from 192.168.122.31 port 38672 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:16:37 np0005548790.localdomain systemd-logind[760]: New session 35 of user zuul.
Dec 06 09:16:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15810 DF PROTO=TCP SPT=57776 DPT=9101 SEQ=2291162255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3F7600000000001030307) 
Dec 06 09:16:37 np0005548790.localdomain systemd[1]: Started Session 35 of User zuul.
Dec 06 09:16:37 np0005548790.localdomain sshd[104616]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:16:38 np0005548790.localdomain sudo[104709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqkcjkjylrkzuwidyuterbzcfhfbhdcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012597.9762073-28-266451713446979/AnsiballZ_stat.py
Dec 06 09:16:38 np0005548790.localdomain sudo[104709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:38 np0005548790.localdomain python3.9[104711]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:16:38 np0005548790.localdomain sudo[104709]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:38 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25279 DF PROTO=TCP SPT=57506 DPT=9105 SEQ=3704715307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3FB200000000001030307) 
Dec 06 09:16:39 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64089 DF PROTO=TCP SPT=55020 DPT=9100 SEQ=3254910829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E3FC9F0000000001030307) 
Dec 06 09:16:39 np0005548790.localdomain sudo[104803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yozzsjostqcqnttinhpansgxgyinqzxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012598.8758898-64-18892558119294/AnsiballZ_command.py
Dec 06 09:16:39 np0005548790.localdomain sudo[104803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:39 np0005548790.localdomain python3.9[104805]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:16:39 np0005548790.localdomain sudo[104803]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:40 np0005548790.localdomain sudo[104896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymfywtaezshevzaboamfvpqxumoaxenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012599.8257246-88-244903424783855/AnsiballZ_stat.py
Dec 06 09:16:40 np0005548790.localdomain sudo[104896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:40 np0005548790.localdomain python3.9[104898]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:16:40 np0005548790.localdomain sudo[104896]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:40 np0005548790.localdomain sudo[104990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxbfjipacpydpzjbrqpvvbfzrouuvehy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012600.5044239-112-96825295886124/AnsiballZ_command.py
Dec 06 09:16:40 np0005548790.localdomain sudo[104990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:41 np0005548790.localdomain python3.9[104992]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:16:41 np0005548790.localdomain sudo[104990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:41 np0005548790.localdomain sudo[105083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqmdetukaruguwlqkzzsnvejwffirzld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012601.3536732-140-176866036205388/AnsiballZ_command.py
Dec 06 09:16:41 np0005548790.localdomain sudo[105083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:41 np0005548790.localdomain python3.9[105085]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:16:41 np0005548790.localdomain sudo[105083]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:41 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15811 DF PROTO=TCP SPT=57776 DPT=9101 SEQ=2291162255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4071F0000000001030307) 
Dec 06 09:16:42 np0005548790.localdomain python3.9[105176]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 09:16:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64090 DF PROTO=TCP SPT=55020 DPT=9100 SEQ=3254910829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E40C5F0000000001030307) 
Dec 06 09:16:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:16:44 np0005548790.localdomain podman[105264]: 2025-12-06 09:16:44.208155362 +0000 UTC m=+0.085613497 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO 
Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:16:44 np0005548790.localdomain python3.9[105267]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:16:44 np0005548790.localdomain podman[105264]: 2025-12-06 09:16:44.403282004 +0000 UTC m=+0.280740129 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 06 09:16:44 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:16:44 np0005548790.localdomain python3.9[105387]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 09:16:46 np0005548790.localdomain python3.9[105477]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:16:46 np0005548790.localdomain python3.9[105525]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:16:47 np0005548790.localdomain sshd[104616]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:16:47 np0005548790.localdomain systemd[1]: session-35.scope: Deactivated successfully.
Dec 06 09:16:47 np0005548790.localdomain systemd[1]: session-35.scope: Consumed 4.871s CPU time.
Dec 06 09:16:47 np0005548790.localdomain systemd-logind[760]: Session 35 logged out. Waiting for processes to exit.
Dec 06 09:16:47 np0005548790.localdomain systemd-logind[760]: Removed session 35.
Dec 06 09:16:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11172 DF PROTO=TCP SPT=43846 DPT=9102 SEQ=1763627359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E420710000000001030307) 
Dec 06 09:16:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5561 DF PROTO=TCP SPT=52076 DPT=9882 SEQ=3963802924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E420780000000001030307) 
Dec 06 09:16:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11174 DF PROTO=TCP SPT=43846 DPT=9102 SEQ=1763627359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E42C5F0000000001030307) 
Dec 06 09:16:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60463 DF PROTO=TCP SPT=44496 DPT=9105 SEQ=1532918288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E438DF0000000001030307) 
Dec 06 09:16:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:16:56 np0005548790.localdomain podman[105541]: 2025-12-06 09:16:56.583091135 +0000 UTC m=+0.095435190 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, 
config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:16:56 np0005548790.localdomain podman[105541]: 2025-12-06 09:16:56.600191183 +0000 UTC m=+0.112535248 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=collectd, name=rhosp17/openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 06 09:16:56 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:16:57 np0005548790.localdomain sshd[105609]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:16:57 np0005548790.localdomain podman[105561]: 2025-12-06 09:16:57.583642877 +0000 UTC m=+0.091722350 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.expose-services=, distribution-scope=public)
Dec 06 09:16:57 np0005548790.localdomain podman[105561]: 2025-12-06 09:16:57.621282077 +0000 UTC m=+0.129361570 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4)
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: tmp-crun.arelsn.mount: Deactivated successfully.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:16:57 np0005548790.localdomain podman[105562]: 2025-12-06 09:16:57.645322501 +0000 UTC m=+0.144501526 container health_status 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible)
Dec 06 09:16:57 np0005548790.localdomain sshd[105609]: Accepted publickey for zuul from 192.168.122.31 port 36100 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:16:57 np0005548790.localdomain podman[105563]: 2025-12-06 09:16:57.69899452 +0000 UTC m=+0.198091922 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:16:57 np0005548790.localdomain systemd-logind[760]: New session 36 of user zuul.
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: Started Session 36 of User zuul.
Dec 06 09:16:57 np0005548790.localdomain sshd[105609]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:16:57 np0005548790.localdomain podman[105563]: 2025-12-06 09:16:57.738967443 +0000 UTC m=+0.238064815 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 06 09:16:57 np0005548790.localdomain podman[105569]: 2025-12-06 09:16:57.749176596 +0000 UTC m=+0.244541549 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:16:57 np0005548790.localdomain podman[105562]: 2025-12-06 09:16:57.752894826 +0000 UTC m=+0.252073871 container exec_died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Deactivated successfully.
Dec 06 09:16:57 np0005548790.localdomain podman[105570]: 2025-12-06 09:16:57.808123887 +0000 UTC m=+0.298987509 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, tcib_managed=true, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 06 09:16:57 np0005548790.localdomain podman[105570]: 2025-12-06 09:16:57.842325055 +0000 UTC m=+0.333188687 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:16:57 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:16:58 np0005548790.localdomain podman[105569]: 2025-12-06 09:16:58.110220628 +0000 UTC m=+0.605585661 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, vcs-type=git)
Dec 06 09:16:58 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:16:58 np0005548790.localdomain sudo[105767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwaawibpnyvpbnwzpwhcfsnramdzqgfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012617.81129-25-50208616963733/AnsiballZ_systemd_service.py
Dec 06 09:16:58 np0005548790.localdomain sudo[105767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:16:58 np0005548790.localdomain python3.9[105769]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:16:58 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:16:58 np0005548790.localdomain systemd-rc-local-generator[105794]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:16:58 np0005548790.localdomain systemd-sysv-generator[105799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:16:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:16:59 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:16:59 np0005548790.localdomain sudo[105767]: pam_unix(sudo:session): session closed for user root
Dec 06 09:16:59 np0005548790.localdomain recover_tripleo_nova_virtqemud[105806]: 62556
Dec 06 09:16:59 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:16:59 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:17:00 np0005548790.localdomain python3.9[105896]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:17:00 np0005548790.localdomain network[105925]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:17:00 np0005548790.localdomain network[105927]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:17:00 np0005548790.localdomain network[105928]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:17:00 np0005548790.localdomain sudo[105903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:17:00 np0005548790.localdomain sudo[105903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:17:00 np0005548790.localdomain sudo[105903]: pam_unix(sudo:session): session closed for user root
Dec 06 09:17:00 np0005548790.localdomain sudo[105936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:17:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60465 DF PROTO=TCP SPT=44496 DPT=9105 SEQ=1532918288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4509F0000000001030307) 
Dec 06 09:17:00 np0005548790.localdomain sudo[105936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:17:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:17:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:17:00 np0005548790.localdomain podman[105953]: 2025-12-06 09:17:00.849513389 +0000 UTC m=+0.077363446 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1)
Dec 06 09:17:00 np0005548790.localdomain podman[105953]: 2025-12-06 09:17:00.863956676 +0000 UTC m=+0.091806753 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.)
Dec 06 09:17:00 np0005548790.localdomain podman[105953]: unhealthy
Dec 06 09:17:00 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:17:00 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:17:00 np0005548790.localdomain podman[105952]: 2025-12-06 09:17:00.957953728 +0000 UTC m=+0.188112946 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1)
Dec 06 09:17:00 np0005548790.localdomain podman[105952]: 2025-12-06 09:17:00.974920862 +0000 UTC m=+0.205080080 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:17:00 np0005548790.localdomain podman[105952]: unhealthy
Dec 06 09:17:00 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:17:00 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:17:01 np0005548790.localdomain sudo[105936]: pam_unix(sudo:session): session closed for user root
Dec 06 09:17:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:17:01 np0005548790.localdomain sudo[106077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:17:01 np0005548790.localdomain sudo[106077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:17:01 np0005548790.localdomain sudo[106077]: pam_unix(sudo:session): session closed for user root
Dec 06 09:17:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5565 DF PROTO=TCP SPT=52076 DPT=9882 SEQ=3963802924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E45D1F0000000001030307) 
Dec 06 09:17:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11176 DF PROTO=TCP SPT=43846 DPT=9102 SEQ=1763627359 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E45D1F0000000001030307) 
Dec 06 09:17:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:17:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15813 DF PROTO=TCP SPT=57776 DPT=9101 SEQ=2291162255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E467200000000001030307) 
Dec 06 09:17:06 np0005548790.localdomain podman[106228]: 2025-12-06 09:17:06.576240545 +0000 UTC m=+0.089305466 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:17:06 np0005548790.localdomain python3.9[106227]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:17:06 np0005548790.localdomain podman[106228]: 2025-12-06 09:17:06.624099048 +0000 UTC m=+0.137163959 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 06 09:17:06 np0005548790.localdomain podman[106228]: unhealthy
Dec 06 09:17:06 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:17:06 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:17:06 np0005548790.localdomain network[106266]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:17:06 np0005548790.localdomain network[106267]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:17:06 np0005548790.localdomain network[106268]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:17:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:17:10 np0005548790.localdomain sudo[106467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfcrlgsjswobofyzqvddhboopnzuzeuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012630.416774-116-88182652774717/AnsiballZ_systemd_service.py
Dec 06 09:17:10 np0005548790.localdomain sudo[106467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:17:10 np0005548790.localdomain python3.9[106469]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:17:11 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:17:11 np0005548790.localdomain systemd-rc-local-generator[106493]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:17:11 np0005548790.localdomain systemd-sysv-generator[106498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:17:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:17:11 np0005548790.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Dec 06 09:17:11 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47462 DF PROTO=TCP SPT=54616 DPT=9101 SEQ=2447506410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E47C1F0000000001030307) 
Dec 06 09:17:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16351 DF PROTO=TCP SPT=35578 DPT=9100 SEQ=2200803826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4819F0000000001030307) 
Dec 06 09:17:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:17:14 np0005548790.localdomain podman[106524]: 2025-12-06 09:17:14.570003627 +0000 UTC m=+0.084159628 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 06 09:17:14 np0005548790.localdomain podman[106524]: 2025-12-06 09:17:14.80131452 +0000 UTC m=+0.315470561 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 06 09:17:14 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:17:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2202 DF PROTO=TCP SPT=51192 DPT=9102 SEQ=3803262526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E495A10000000001030307) 
Dec 06 09:17:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5818 DF PROTO=TCP SPT=53828 DPT=9882 SEQ=4137668366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E495A70000000001030307) 
Dec 06 09:17:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5820 DF PROTO=TCP SPT=53828 DPT=9882 SEQ=4137668366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4A19F0000000001030307) 
Dec 06 09:17:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13643 DF PROTO=TCP SPT=57680 DPT=9105 SEQ=2193556710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4AE1F0000000001030307) 
Dec 06 09:17:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:17:26 np0005548790.localdomain podman[106554]: 2025-12-06 09:17:26.789508412 +0000 UTC m=+0.055919610 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 06 09:17:26 np0005548790.localdomain podman[106554]: 2025-12-06 09:17:26.802093339 +0000 UTC m=+0.068504517 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible)
Dec 06 09:17:26 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:17:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25281 DF PROTO=TCP SPT=57506 DPT=9105 SEQ=3704715307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4B9200000000001030307) 
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:17:28 np0005548790.localdomain podman[106576]: 2025-12-06 09:17:28.576679289 +0000 UTC m=+0.081885526 container health_status 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:17:28 np0005548790.localdomain podman[106591]: 2025-12-06 09:17:28.636500103 +0000 UTC m=+0.130804389 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, 
build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=logrotate_crond, release=1761123044, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:17:28 np0005548790.localdomain podman[106576]: 2025-12-06 09:17:28.646163493 +0000 UTC m=+0.151369730 container exec_died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, release=1761123044)
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Deactivated successfully.
Dec 06 09:17:28 np0005548790.localdomain podman[106591]: 2025-12-06 09:17:28.672310474 +0000 UTC m=+0.166614760 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, tcib_managed=true, version=17.1.12)
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: tmp-crun.pEyHEC.mount: Deactivated successfully.
Dec 06 09:17:28 np0005548790.localdomain podman[106574]: 2025-12-06 09:17:28.740667877 +0000 UTC m=+0.250510099 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:17:28 np0005548790.localdomain podman[106574]: 2025-12-06 09:17:28.750122581 +0000 UTC m=+0.259964833 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12)
Dec 06 09:17:28 np0005548790.localdomain podman[106575]: Error: container 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be is not running
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Main process exited, code=exited, status=125/n/a
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Failed with result 'exit-code'.
Dec 06 09:17:28 np0005548790.localdomain podman[106582]: 2025-12-06 09:17:28.792564489 +0000 UTC m=+0.288418256 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack 
Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public)
Dec 06 09:17:28 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:17:29 np0005548790.localdomain podman[106582]: 2025-12-06 09:17:29.138120995 +0000 UTC m=+0.633974752 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Dec 06 09:17:29 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:17:29 np0005548790.localdomain systemd[1]: tmp-crun.me5sXp.mount: Deactivated successfully.
Dec 06 09:17:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13645 DF PROTO=TCP SPT=57680 DPT=9105 SEQ=2193556710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4C5E00000000001030307) 
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: tmp-crun.FJreYx.mount: Deactivated successfully.
Dec 06 09:17:31 np0005548790.localdomain podman[106675]: 2025-12-06 09:17:31.334455645 +0000 UTC m=+0.095416630 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_metadata_agent, batch=17.1_20251118.1)
Dec 06 09:17:31 np0005548790.localdomain podman[106675]: 2025-12-06 09:17:31.376236066 +0000 UTC m=+0.137197061 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 06 09:17:31 np0005548790.localdomain podman[106675]: unhealthy
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:17:31 np0005548790.localdomain podman[106676]: 2025-12-06 09:17:31.423465453 +0000 UTC m=+0.180989226 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z)
Dec 06 09:17:31 np0005548790.localdomain podman[106676]: 2025-12-06 09:17:31.466157717 +0000 UTC m=+0.223681510 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible)
Dec 06 09:17:31 np0005548790.localdomain podman[106676]: unhealthy
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:17:31 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:17:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2206 DF PROTO=TCP SPT=51192 DPT=9102 SEQ=3803262526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4D11F0000000001030307) 
Dec 06 09:17:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47464 DF PROTO=TCP SPT=54616 DPT=9101 SEQ=2447506410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4DD1F0000000001030307) 
Dec 06 09:17:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:17:37 np0005548790.localdomain podman[106713]: 2025-12-06 09:17:37.068299282 +0000 UTC m=+0.081932388 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:17:37 np0005548790.localdomain podman[106713]: 2025-12-06 09:17:37.114217643 +0000 UTC m=+0.127850769 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:17:37 np0005548790.localdomain podman[106713]: unhealthy
Dec 06 09:17:37 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:17:37 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:17:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64093 DF PROTO=TCP SPT=55020 DPT=9100 SEQ=3254910829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4EB1F0000000001030307) 
Dec 06 09:17:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65089 DF PROTO=TCP SPT=34032 DPT=9100 SEQ=1534978943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E4F6DF0000000001030307) 
Dec 06 09:17:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:17:45 np0005548790.localdomain systemd[1]: tmp-crun.S5KUe4.mount: Deactivated successfully.
Dec 06 09:17:45 np0005548790.localdomain podman[106737]: 2025-12-06 09:17:45.314981367 +0000 UTC m=+0.077385007 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1761123044, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, 
managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:17:45 np0005548790.localdomain podman[106737]: 2025-12-06 09:17:45.510216442 +0000 UTC m=+0.272620132 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:17:45 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:17:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55210 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=814405206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E50AD20000000001030307) 
Dec 06 09:17:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33949 DF PROTO=TCP SPT=41650 DPT=9882 SEQ=1564026543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E50AD70000000001030307) 
Dec 06 09:17:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55212 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=814405206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E516DF0000000001030307) 
Dec 06 09:17:53 np0005548790.localdomain podman[106510]: time="2025-12-06T09:17:53Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL"
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: tmp-crun.ian33M.mount: Deactivated successfully.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: libpod-610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.scope: Deactivated successfully.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: libpod-610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.scope: Consumed 4.815s CPU time.
Dec 06 09:17:53 np0005548790.localdomain podman[106510]: 2025-12-06 09:17:53.516237622 +0000 UTC m=+42.087746684 container stop 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:17:53 np0005548790.localdomain podman[106510]: 2025-12-06 09:17:53.545687083 +0000 UTC m=+42.117196175 container died 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=)
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.timer: Deactivated successfully.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Failed to open /run/systemd/transient/610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: No such file or directory
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: tmp-crun.nP3fpF.mount: Deactivated successfully.
Dec 06 09:17:53 np0005548790.localdomain podman[106510]: 2025-12-06 09:17:53.606078832 +0000 UTC m=+42.177587884 container cleanup 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, version=17.1.12)
Dec 06 09:17:53 np0005548790.localdomain podman[106510]: ceilometer_agent_compute
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.timer: Failed to open /run/systemd/transient/610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.timer: No such file or directory
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Failed to open /run/systemd/transient/610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: No such file or directory
Dec 06 09:17:53 np0005548790.localdomain podman[106768]: 2025-12-06 09:17:53.664477878 +0000 UTC m=+0.132610727 container cleanup 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container)
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: libpod-conmon-610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.scope: Deactivated successfully.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.timer: Failed to open /run/systemd/transient/610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.timer: No such file or directory
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: Failed to open /run/systemd/transient/610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be.service: No such file or directory
Dec 06 09:17:53 np0005548790.localdomain podman[106781]: 2025-12-06 09:17:53.770749698 +0000 UTC m=+0.071994961 container cleanup 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 06 09:17:53 np0005548790.localdomain podman[106781]: ceilometer_agent_compute
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Dec 06 09:17:53 np0005548790.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.090s CPU time, no IO.
Dec 06 09:17:53 np0005548790.localdomain sudo[106467]: pam_unix(sudo:session): session closed for user root
Dec 06 09:17:54 np0005548790.localdomain sudo[106883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcveuqbogusbnuvcdbfbbgabjkcydpbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012673.9407005-116-217653658523379/AnsiballZ_systemd_service.py
Dec 06 09:17:54 np0005548790.localdomain sudo[106883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:17:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3c39009f0f6008c6d55691d8dd4cf23ac737f3eb0a424c8d14f1ebded10dc0a2-merged.mount: Deactivated successfully.
Dec 06 09:17:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be-userdata-shm.mount: Deactivated successfully.
Dec 06 09:17:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7157 DF PROTO=TCP SPT=38510 DPT=9105 SEQ=2256886832 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5231F0000000001030307) 
Dec 06 09:17:54 np0005548790.localdomain python3.9[106885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:17:54 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:17:54 np0005548790.localdomain systemd-rc-local-generator[106914]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:17:54 np0005548790.localdomain systemd-sysv-generator[106917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:17:54 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:17:55 np0005548790.localdomain systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 06 09:17:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:17:57 np0005548790.localdomain systemd[1]: tmp-crun.1eKYEP.mount: Deactivated successfully.
Dec 06 09:17:57 np0005548790.localdomain podman[106940]: 2025-12-06 09:17:57.32041913 +0000 UTC m=+0.088488733 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, release=1761123044, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team)
Dec 06 09:17:57 np0005548790.localdomain podman[106940]: 2025-12-06 09:17:57.357128015 +0000 UTC m=+0.125197608 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 06 09:17:57 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:17:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60468 DF PROTO=TCP SPT=44496 DPT=9105 SEQ=1532918288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E52F200000000001030307) 
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:17:59 np0005548790.localdomain podman[106961]: Error: container 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 is not running
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Main process exited, code=exited, status=125/n/a
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Failed with result 'exit-code'.
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: tmp-crun.iPstIf.mount: Deactivated successfully.
Dec 06 09:17:59 np0005548790.localdomain podman[106960]: 2025-12-06 09:17:59.627075829 +0000 UTC m=+0.140376485 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, release=1761123044, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team)
Dec 06 09:17:59 np0005548790.localdomain podman[106960]: 2025-12-06 09:17:59.636467381 +0000 UTC m=+0.149767997 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, distribution-scope=public, build-date=2025-11-18T23:44:13Z)
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: tmp-crun.FxOZCo.mount: Deactivated successfully.
Dec 06 09:17:59 np0005548790.localdomain podman[106963]: 2025-12-06 09:17:59.725515719 +0000 UTC m=+0.231757936 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., 
com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z)
Dec 06 09:17:59 np0005548790.localdomain podman[106963]: 2025-12-06 09:17:59.760108827 +0000 UTC m=+0.266351064 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, 
container_name=logrotate_crond, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container)
Dec 06 09:17:59 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:17:59 np0005548790.localdomain podman[106962]: 2025-12-06 09:17:59.778920061 +0000 UTC m=+0.289228447 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, 
version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team)
Dec 06 09:18:00 np0005548790.localdomain podman[106962]: 2025-12-06 09:18:00.164564334 +0000 UTC m=+0.674872730 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 06 09:18:00 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:18:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7159 DF PROTO=TCP SPT=38510 DPT=9105 SEQ=2256886832 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E53AE00000000001030307) 
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:18:01 np0005548790.localdomain recover_tripleo_nova_virtqemud[107043]: 62556
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:18:01 np0005548790.localdomain podman[107035]: 2025-12-06 09:18:01.577891665 +0000 UTC m=+0.089084370 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:18:01 np0005548790.localdomain podman[107035]: 2025-12-06 09:18:01.623145849 +0000 UTC m=+0.134338534 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Dec 06 09:18:01 np0005548790.localdomain podman[107035]: unhealthy
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:18:01 np0005548790.localdomain podman[107038]: 2025-12-06 09:18:01.635591043 +0000 UTC m=+0.133179793 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:18:01 np0005548790.localdomain podman[107038]: 2025-12-06 09:18:01.678313968 +0000 UTC m=+0.175902668 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.openshift.expose-services=, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64)
Dec 06 09:18:01 np0005548790.localdomain podman[107038]: unhealthy
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:01 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:18:02 np0005548790.localdomain sudo[107077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:18:02 np0005548790.localdomain sudo[107077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:18:02 np0005548790.localdomain sudo[107077]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:02 np0005548790.localdomain sudo[107092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:18:02 np0005548790.localdomain sudo[107092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:18:02 np0005548790.localdomain sudo[107092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:03 np0005548790.localdomain sudo[107138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:18:03 np0005548790.localdomain sudo[107138]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:18:03 np0005548790.localdomain sudo[107138]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55214 DF PROTO=TCP SPT=42372 DPT=9102 SEQ=814405206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E547200000000001030307) 
Dec 06 09:18:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:18:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63255 DF PROTO=TCP SPT=50504 DPT=9100 SEQ=1930167346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5549F0000000001030307) 
Dec 06 09:18:07 np0005548790.localdomain podman[107153]: 2025-12-06 09:18:07.335107079 +0000 UTC m=+0.091229657 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:18:07 np0005548790.localdomain podman[107153]: 2025-12-06 09:18:07.363187302 +0000 UTC m=+0.119309900 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 09:18:07 np0005548790.localdomain podman[107153]: unhealthy
Dec 06 09:18:07 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:07 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:18:09 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16354 DF PROTO=TCP SPT=35578 DPT=9100 SEQ=2200803826 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E55F1F0000000001030307) 
Dec 06 09:18:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63257 DF PROTO=TCP SPT=50504 DPT=9100 SEQ=1930167346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E56C5F0000000001030307) 
Dec 06 09:18:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:18:16 np0005548790.localdomain systemd[1]: tmp-crun.Rc5p8e.mount: Deactivated successfully.
Dec 06 09:18:16 np0005548790.localdomain podman[107175]: 2025-12-06 09:18:16.083554299 +0000 UTC m=+0.096572000 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 06 09:18:16 np0005548790.localdomain podman[107175]: 2025-12-06 09:18:16.313273719 +0000 UTC m=+0.326291350 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., 
com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com)
Dec 06 09:18:16 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:18:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55123 DF PROTO=TCP SPT=46874 DPT=9102 SEQ=3054299721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E580010000000001030307) 
Dec 06 09:18:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2489 DF PROTO=TCP SPT=38290 DPT=9882 SEQ=3629350217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E580080000000001030307) 
Dec 06 09:18:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2491 DF PROTO=TCP SPT=38290 DPT=9882 SEQ=3629350217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E58C1F0000000001030307) 
Dec 06 09:18:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13945 DF PROTO=TCP SPT=50072 DPT=9105 SEQ=1936688337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5985F0000000001030307) 
Dec 06 09:18:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:18:27 np0005548790.localdomain systemd[1]: tmp-crun.swHiLH.mount: Deactivated successfully.
Dec 06 09:18:27 np0005548790.localdomain podman[107204]: 2025-12-06 09:18:27.565112425 +0000 UTC m=+0.085351291 container health_status ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:18:27 np0005548790.localdomain podman[107204]: 2025-12-06 09:18:27.599139487 +0000 UTC m=+0.119378303 container exec_died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible)
Dec 06 09:18:27 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Deactivated successfully.
Dec 06 09:18:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13648 DF PROTO=TCP SPT=57680 DPT=9105 SEQ=2193556710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5A51F0000000001030307) 
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:18:29 np0005548790.localdomain podman[107223]: 2025-12-06 09:18:29.814859087 +0000 UTC m=+0.078783084 container health_status 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=iscsid, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team)
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:18:29 np0005548790.localdomain podman[107224]: Error: container 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 is not running
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Main process exited, code=exited, status=125/n/a
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Failed with result 'exit-code'.
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: tmp-crun.51BWgB.mount: Deactivated successfully.
Dec 06 09:18:29 np0005548790.localdomain podman[107250]: 2025-12-06 09:18:29.912730492 +0000 UTC m=+0.069569687 container health_status a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, tcib_managed=true, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:18:29 np0005548790.localdomain podman[107250]: 2025-12-06 09:18:29.920361696 +0000 UTC m=+0.077200881 container exec_died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4)
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Deactivated successfully.
Dec 06 09:18:29 np0005548790.localdomain podman[107223]: 2025-12-06 09:18:29.935959404 +0000 UTC m=+0.199883351 container exec_died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_id=tripleo_step3, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, release=1761123044, container_name=iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64)
Dec 06 09:18:29 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Deactivated successfully.
Dec 06 09:18:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:18:30 np0005548790.localdomain podman[107271]: 2025-12-06 09:18:30.577841168 +0000 UTC m=+0.097255630 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, name=rhosp17/openstack-nova-compute, 
version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:18:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13947 DF PROTO=TCP SPT=50072 DPT=9105 SEQ=1936688337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5B0200000000001030307) 
Dec 06 09:18:30 np0005548790.localdomain podman[107271]: 2025-12-06 09:18:30.96944169 +0000 UTC m=+0.488856142 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 06 09:18:30 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:18:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:18:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:18:32 np0005548790.localdomain podman[107294]: 2025-12-06 09:18:32.569073848 +0000 UTC m=+0.086564273 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:18:32 np0005548790.localdomain podman[107295]: 2025-12-06 09:18:32.618980156 +0000 UTC m=+0.132405952 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller)
Dec 06 09:18:32 np0005548790.localdomain podman[107295]: 2025-12-06 09:18:32.632977822 +0000 UTC m=+0.146403628 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:18:32 np0005548790.localdomain podman[107295]: unhealthy
Dec 06 09:18:32 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:32 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:18:32 np0005548790.localdomain podman[107294]: 2025-12-06 09:18:32.641575522 +0000 UTC m=+0.159065957 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, url=https://www.redhat.com)
Dec 06 09:18:32 np0005548790.localdomain podman[107294]: unhealthy
Dec 06 09:18:32 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:32 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:18:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55127 DF PROTO=TCP SPT=46874 DPT=9102 SEQ=3054299721 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5BD1F0000000001030307) 
Dec 06 09:18:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36430 DF PROTO=TCP SPT=43642 DPT=9101 SEQ=2129998308 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5C7200000000001030307) 
Dec 06 09:18:37 np0005548790.localdomain podman[106925]: time="2025-12-06T09:18:37Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: libpod-8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.scope: Deactivated successfully.
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: libpod-8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.scope: Consumed 5.140s CPU time.
Dec 06 09:18:37 np0005548790.localdomain podman[106925]: 2025-12-06 09:18:37.135282942 +0000 UTC m=+42.096248893 container stop 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible)
Dec 06 09:18:37 np0005548790.localdomain podman[106925]: 2025-12-06 09:18:37.167968179 +0000 UTC m=+42.128934150 container died 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:12:45Z)
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.timer: Deactivated successfully.
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Failed to open /run/systemd/transient/8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: No such file or directory
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9-userdata-shm.mount: Deactivated successfully.
Dec 06 09:18:37 np0005548790.localdomain podman[106925]: 2025-12-06 09:18:37.216651004 +0000 UTC m=+42.177616915 container cleanup 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 06 09:18:37 np0005548790.localdomain podman[106925]: ceilometer_agent_ipmi
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.timer: Failed to open /run/systemd/transient/8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.timer: No such file or directory
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Failed to open /run/systemd/transient/8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: No such file or directory
Dec 06 09:18:37 np0005548790.localdomain podman[107331]: 2025-12-06 09:18:37.231872802 +0000 UTC m=+0.076982865 container cleanup 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: libpod-conmon-8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.scope: Deactivated successfully.
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.timer: Failed to open /run/systemd/transient/8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.timer: No such file or directory
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: Failed to open /run/systemd/transient/8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9.service: No such file or directory
Dec 06 09:18:37 np0005548790.localdomain podman[107346]: 2025-12-06 09:18:37.324330771 +0000 UTC m=+0.063562855 container cleanup 8985161d5fae8569ee2d42fd3a17b24ccaa1697f060233397bfe2e1609dc17c9 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com)
Dec 06 09:18:37 np0005548790.localdomain podman[107346]: ceilometer_agent_ipmi
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 06 09:18:37 np0005548790.localdomain sudo[106883]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:18:37 np0005548790.localdomain podman[107371]: 2025-12-06 09:18:37.570139253 +0000 UTC m=+0.088494263 container health_status 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=nova_compute, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:18:37 np0005548790.localdomain podman[107371]: 2025-12-06 09:18:37.618304685 +0000 UTC m=+0.136659685 container exec_died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:18:37 np0005548790.localdomain podman[107371]: unhealthy
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:37 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:18:37 np0005548790.localdomain sudo[107468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cexlmemmwblwjoorbkyouxdtajcqqntw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012717.5198636-116-41656827822311/AnsiballZ_systemd_service.py
Dec 06 09:18:37 np0005548790.localdomain sudo[107468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5d6d513da38af6e3cd155d4b3518d4b989d374acab410fbc1ad1d5be1919c445-merged.mount: Deactivated successfully.
Dec 06 09:18:38 np0005548790.localdomain python3.9[107470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:38 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:18:38 np0005548790.localdomain systemd-sysv-generator[107498]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:18:38 np0005548790.localdomain systemd-rc-local-generator[107495]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:18:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:18:38 np0005548790.localdomain systemd[1]: Stopping collectd container...
Dec 06 09:18:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65092 DF PROTO=TCP SPT=34032 DPT=9100 SEQ=1534978943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5D5200000000001030307) 
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: libpod-ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.scope: Deactivated successfully.
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: libpod-ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.scope: Consumed 2.127s CPU time.
Dec 06 09:18:42 np0005548790.localdomain podman[107511]: 2025-12-06 09:18:42.090495627 +0000 UTC m=+3.552999063 container died ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-collectd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.timer: Deactivated successfully.
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Failed to open /run/systemd/transient/ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: No such file or directory
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7-userdata-shm.mount: Deactivated successfully.
Dec 06 09:18:42 np0005548790.localdomain podman[107511]: 2025-12-06 09:18:42.148880023 +0000 UTC m=+3.611383429 container cleanup ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 06 09:18:42 np0005548790.localdomain podman[107511]: collectd
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.timer: Failed to open /run/systemd/transient/ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.timer: No such file or directory
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Failed to open /run/systemd/transient/ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: No such file or directory
Dec 06 09:18:42 np0005548790.localdomain podman[107524]: 2025-12-06 09:18:42.23416868 +0000 UTC m=+0.136401619 container cleanup ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: libpod-conmon-ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.scope: Deactivated successfully.
Dec 06 09:18:42 np0005548790.localdomain podman[107551]: error opening file `/run/crun/ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7/status`: No such file or directory
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.timer: Failed to open /run/systemd/transient/ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.timer: No such file or directory
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: Failed to open /run/systemd/transient/ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7.service: No such file or directory
Dec 06 09:18:42 np0005548790.localdomain podman[107539]: 2025-12-06 09:18:42.327559404 +0000 UTC m=+0.061012416 container cleanup ba0db8f99b1222b7f1e5eb0cd4cb37e0c9a29c16a3f908050c466df831dbffa7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public)
Dec 06 09:18:42 np0005548790.localdomain podman[107539]: collectd
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'.
Dec 06 09:18:42 np0005548790.localdomain systemd[1]: Stopped collectd container.
Dec 06 09:18:42 np0005548790.localdomain sudo[107468]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:42 np0005548790.localdomain sudo[107643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clyviufhrbjwoulbpiwbegfmzhlrveyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012722.491287-116-78363977489549/AnsiballZ_systemd_service.py
Dec 06 09:18:42 np0005548790.localdomain sudo[107643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:43 np0005548790.localdomain python3.9[107645]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0521d8df5d3673de67e6c677f90dfbd55b1c1f914f1671502a747da6648e8a6d-merged.mount: Deactivated successfully.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:18:43 np0005548790.localdomain systemd-sysv-generator[107676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:18:43 np0005548790.localdomain systemd-rc-local-generator[107671]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:18:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50110 DF PROTO=TCP SPT=39194 DPT=9100 SEQ=223364499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5E15F0000000001030307) 
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: Stopping iscsid container...
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: libpod-20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.scope: Deactivated successfully.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: libpod-20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.scope: Consumed 1.054s CPU time.
Dec 06 09:18:43 np0005548790.localdomain podman[107686]: 2025-12-06 09:18:43.560122438 +0000 UTC m=+0.076462141 container died 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=iscsid)
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.timer: Deactivated successfully.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Failed to open /run/systemd/transient/20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: No such file or directory
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d-userdata-shm.mount: Deactivated successfully.
Dec 06 09:18:43 np0005548790.localdomain podman[107686]: 2025-12-06 09:18:43.610274573 +0000 UTC m=+0.126614276 container cleanup 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid)
Dec 06 09:18:43 np0005548790.localdomain podman[107686]: iscsid
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.timer: Failed to open /run/systemd/transient/20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.timer: No such file or directory
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Failed to open /run/systemd/transient/20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: No such file or directory
Dec 06 09:18:43 np0005548790.localdomain podman[107700]: 2025-12-06 09:18:43.64073303 +0000 UTC m=+0.070448600 container cleanup 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., 
name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-iscsid-container)
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: libpod-conmon-20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.scope: Deactivated successfully.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.timer: Failed to open /run/systemd/transient/20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.timer: No such file or directory
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: Failed to open /run/systemd/transient/20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d.service: No such file or directory
Dec 06 09:18:43 np0005548790.localdomain podman[107713]: 2025-12-06 09:18:43.74063419 +0000 UTC m=+0.068077228 container cleanup 20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:18:43 np0005548790.localdomain podman[107713]: iscsid
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Dec 06 09:18:43 np0005548790.localdomain systemd[1]: Stopped iscsid container.
Dec 06 09:18:43 np0005548790.localdomain sudo[107643]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-180b3aecafe7c8da44d60e3a56d560d6da12982d5d153924b65220d20b7de3a0-merged.mount: Deactivated successfully.
Dec 06 09:18:44 np0005548790.localdomain sudo[107814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imcpestyeyrtsfoksfjwmymsoymuvgcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012723.910093-116-254960483856963/AnsiballZ_systemd_service.py
Dec 06 09:18:44 np0005548790.localdomain sudo[107814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:44 np0005548790.localdomain python3.9[107816]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:45 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:18:45 np0005548790.localdomain systemd-sysv-generator[107843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:18:45 np0005548790.localdomain systemd-rc-local-generator[107840]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: Stopping logrotate_crond container...
Dec 06 09:18:46 np0005548790.localdomain podman[107856]: 2025-12-06 09:18:46.44412382 +0000 UTC m=+0.090382254 container health_status ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team)
Dec 06 09:18:46 np0005548790.localdomain crond[69486]: (CRON) INFO (Shutting down)
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: libpod-a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.scope: Deactivated successfully.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: libpod-a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.scope: Consumed 1.009s CPU time.
Dec 06 09:18:46 np0005548790.localdomain podman[107864]: 2025-12-06 09:18:46.473912058 +0000 UTC m=+0.110591606 container died a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.timer: Deactivated successfully.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Failed to open /run/systemd/transient/a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: No such file or directory
Dec 06 09:18:46 np0005548790.localdomain podman[107864]: 2025-12-06 09:18:46.530703242 +0000 UTC m=+0.167382760 container cleanup a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container)
Dec 06 09:18:46 np0005548790.localdomain podman[107864]: logrotate_crond
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.timer: Failed to open /run/systemd/transient/a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.timer: No such file or directory
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Failed to open /run/systemd/transient/a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: No such file or directory
Dec 06 09:18:46 np0005548790.localdomain podman[107900]: 2025-12-06 09:18:46.564538509 +0000 UTC m=+0.075498105 container cleanup a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=logrotate_crond, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true)
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: libpod-conmon-a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.scope: Deactivated successfully.
Dec 06 09:18:46 np0005548790.localdomain podman[107929]: error opening file `/run/crun/a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810/status`: No such file or directory
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.timer: Failed to open /run/systemd/transient/a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.timer: No such file or directory
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: Failed to open /run/systemd/transient/a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810.service: No such file or directory
Dec 06 09:18:46 np0005548790.localdomain podman[107916]: 2025-12-06 09:18:46.677908149 +0000 UTC m=+0.076836491 container cleanup a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 06 09:18:46 np0005548790.localdomain podman[107916]: logrotate_crond
Dec 06 09:18:46 np0005548790.localdomain podman[107856]: 2025-12-06 09:18:46.680245462 +0000 UTC m=+0.326503936 container exec_died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: Stopped logrotate_crond container.
Dec 06 09:18:46 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Deactivated successfully.
Dec 06 09:18:46 np0005548790.localdomain sudo[107814]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:47 np0005548790.localdomain sudo[108021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbidqkpnbvbvvoqqehziwemtmudrvyok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012726.835803-116-170746201527906/AnsiballZ_systemd_service.py
Dec 06 09:18:47 np0005548790.localdomain sudo[108021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:47 np0005548790.localdomain python3.9[108023]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3a7065aceffc6d2146ce223b38d40dafae928acede64701f0e57091e6babe580-merged.mount: Deactivated successfully.
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a44fd23256151af78f8df54cf3f15d43813e58feaf7c015fce3c1b6c0df23810-userdata-shm.mount: Deactivated successfully.
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:18:47 np0005548790.localdomain systemd-rc-local-generator[108046]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:18:47 np0005548790.localdomain systemd-sysv-generator[108051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: Stopping metrics_qdr container...
Dec 06 09:18:47 np0005548790.localdomain kernel: qdrouterd[55054]: segfault at 0 ip 00007f3e4f5837cb sp 00007ffc427761c0 error 4 in libc.so.6[7f3e4f520000+175000]
Dec 06 09:18:47 np0005548790.localdomain kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 06 09:18:47 np0005548790.localdomain systemd[1]: Started Process Core Dump (PID 108076/UID 0).
Dec 06 09:18:48 np0005548790.localdomain systemd-coredump[108077]: Resource limits disable core dumping for process 55054 (qdrouterd).
Dec 06 09:18:48 np0005548790.localdomain systemd-coredump[108077]: Process 55054 (qdrouterd) of user 42465 dumped core.
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: systemd-coredump@0-108076-0.service: Deactivated successfully.
Dec 06 09:18:48 np0005548790.localdomain podman[108064]: 2025-12-06 09:18:48.04531981 +0000 UTC m=+0.250079278 container stop ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: libpod-ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.scope: Deactivated successfully.
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: libpod-ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.scope: Consumed 27.964s CPU time.
Dec 06 09:18:48 np0005548790.localdomain podman[108064]: 2025-12-06 09:18:48.07999728 +0000 UTC m=+0.284756678 container died ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd)
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.timer: Deactivated successfully.
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Failed to open /run/systemd/transient/ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: No such file or directory
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b-userdata-shm.mount: Deactivated successfully.
Dec 06 09:18:48 np0005548790.localdomain podman[108064]: 2025-12-06 09:18:48.137008869 +0000 UTC m=+0.341768277 container cleanup ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc.)
Dec 06 09:18:48 np0005548790.localdomain podman[108064]: metrics_qdr
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.timer: Failed to open /run/systemd/transient/ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.timer: No such file or directory
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Failed to open /run/systemd/transient/ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: No such file or directory
Dec 06 09:18:48 np0005548790.localdomain podman[108081]: 2025-12-06 09:18:48.192065195 +0000 UTC m=+0.131011654 container cleanup ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: libpod-conmon-ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.scope: Deactivated successfully.
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.timer: Failed to open /run/systemd/transient/ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.timer: No such file or directory
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: Failed to open /run/systemd/transient/ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b.service: No such file or directory
Dec 06 09:18:48 np0005548790.localdomain podman[108097]: 2025-12-06 09:18:48.301306545 +0000 UTC m=+0.075800854 container cleanup ad074d1e194542e3c6dd1f976d9af7aa79758d0fd28b2bd53ed9bc19d6ce0a6b (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'c93aed9c81ad5102fc4c6784fdec0c75'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, distribution-scope=public, release=1761123044)
Dec 06 09:18:48 np0005548790.localdomain podman[108097]: metrics_qdr
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: Stopped metrics_qdr container.
Dec 06 09:18:48 np0005548790.localdomain sudo[108021]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23496 DF PROTO=TCP SPT=41414 DPT=9102 SEQ=278033306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5F5310000000001030307) 
Dec 06 09:18:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20557 DF PROTO=TCP SPT=54894 DPT=9882 SEQ=603844770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E5F5380000000001030307) 
Dec 06 09:18:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-415e7a279decd7116c2befbd34e92cf4f0c1820f58473bd34c5452500e4d856c-merged.mount: Deactivated successfully.
Dec 06 09:18:48 np0005548790.localdomain sudo[108199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-leupvdfwgfxxkzsevuelulvrkkztnodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012728.4797838-116-52569904147068/AnsiballZ_systemd_service.py
Dec 06 09:18:48 np0005548790.localdomain sudo[108199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:49 np0005548790.localdomain python3.9[108201]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:49 np0005548790.localdomain sudo[108199]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:49 np0005548790.localdomain sudo[108292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhsydhfzqzbdmeujcvcukiurabtmdfqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012729.2088823-116-180322299897874/AnsiballZ_systemd_service.py
Dec 06 09:18:49 np0005548790.localdomain sudo[108292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:49 np0005548790.localdomain python3.9[108294]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:49 np0005548790.localdomain sudo[108292]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:50 np0005548790.localdomain sudo[108385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtwjublrvqqvkobimywcpkedhtirbnka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012729.9248366-116-166355509805316/AnsiballZ_systemd_service.py
Dec 06 09:18:50 np0005548790.localdomain sudo[108385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:50 np0005548790.localdomain python3.9[108387]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:50 np0005548790.localdomain sudo[108385]: pam_unix(sudo:session): session closed for user root
Dec 06 09:18:50 np0005548790.localdomain sudo[108478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwmsjfilqujjccnwayouutfhwiqrdynr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012730.6879518-116-1145221062869/AnsiballZ_systemd_service.py
Dec 06 09:18:50 np0005548790.localdomain sudo[108478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:18:51 np0005548790.localdomain python3.9[108480]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:18:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23498 DF PROTO=TCP SPT=41414 DPT=9102 SEQ=278033306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6011F0000000001030307) 
Dec 06 09:18:52 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:18:52 np0005548790.localdomain systemd-rc-local-generator[108508]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:18:52 np0005548790.localdomain systemd-sysv-generator[108512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:18:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:18:52 np0005548790.localdomain systemd[1]: Stopping nova_compute container...
Dec 06 09:18:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56615 DF PROTO=TCP SPT=43832 DPT=9105 SEQ=155378256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E60D9F0000000001030307) 
Dec 06 09:18:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7162 DF PROTO=TCP SPT=38510 DPT=9105 SEQ=2256886832 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6191F0000000001030307) 
Dec 06 09:19:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56617 DF PROTO=TCP SPT=43832 DPT=9105 SEQ=155378256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6255F0000000001030307) 
Dec 06 09:19:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:19:01 np0005548790.localdomain podman[108533]: 2025-12-06 09:19:01.314923595 +0000 UTC m=+0.080892811 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:19:01 np0005548790.localdomain podman[108533]: 2025-12-06 09:19:01.730263543 +0000 UTC m=+0.496232759 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:19:01 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: tmp-crun.Lh0Uli.mount: Deactivated successfully.
Dec 06 09:19:03 np0005548790.localdomain podman[108557]: 2025-12-06 09:19:03.566507097 +0000 UTC m=+0.082233966 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, 
url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller)
Dec 06 09:19:03 np0005548790.localdomain podman[108557]: 2025-12-06 09:19:03.576951988 +0000 UTC m=+0.092678887 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:19:03 np0005548790.localdomain sudo[108576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:19:03 np0005548790.localdomain sudo[108576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:19:03 np0005548790.localdomain sudo[108576]: pam_unix(sudo:session): session closed for user root
Dec 06 09:19:03 np0005548790.localdomain podman[108556]: 2025-12-06 09:19:03.598811203 +0000 UTC m=+0.116966477 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, release=1761123044, managed_by=tripleo_ansible)
Dec 06 09:19:03 np0005548790.localdomain podman[108556]: 2025-12-06 09:19:03.612650564 +0000 UTC m=+0.130805838 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 06 09:19:03 np0005548790.localdomain podman[108556]: unhealthy
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:19:03 np0005548790.localdomain podman[108557]: unhealthy
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:19:03 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:19:03 np0005548790.localdomain sudo[108612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:19:03 np0005548790.localdomain sudo[108612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:19:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20561 DF PROTO=TCP SPT=54894 DPT=9882 SEQ=603844770 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6311F0000000001030307) 
Dec 06 09:19:04 np0005548790.localdomain sudo[108612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:19:05 np0005548790.localdomain sudo[108658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:19:05 np0005548790.localdomain sudo[108658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:19:05 np0005548790.localdomain sudo[108658]: pam_unix(sudo:session): session closed for user root
Dec 06 09:19:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44995 DF PROTO=TCP SPT=51050 DPT=9101 SEQ=2993449341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E63D200000000001030307) 
Dec 06 09:19:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:19:07 np0005548790.localdomain podman[108673]: Error: container 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 is not running
Dec 06 09:19:07 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Main process exited, code=exited, status=125/n/a
Dec 06 09:19:07 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed with result 'exit-code'.
Dec 06 09:19:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63260 DF PROTO=TCP SPT=50504 DPT=9100 SEQ=1930167346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E64B200000000001030307) 
Dec 06 09:19:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28198 DF PROTO=TCP SPT=51228 DPT=9100 SEQ=569190495 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6565F0000000001030307) 
Dec 06 09:19:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17448 DF PROTO=TCP SPT=53452 DPT=9102 SEQ=23459929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E66A610000000001030307) 
Dec 06 09:19:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1987 DF PROTO=TCP SPT=59274 DPT=9882 SEQ=3927075878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E66A670000000001030307) 
Dec 06 09:19:19 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:19:19 np0005548790.localdomain recover_tripleo_nova_virtqemud[108685]: 62556
Dec 06 09:19:19 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:19:19 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:19:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1989 DF PROTO=TCP SPT=59274 DPT=9882 SEQ=3927075878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6765F0000000001030307) 
Dec 06 09:19:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38585 DF PROTO=TCP SPT=50756 DPT=9105 SEQ=1862734501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E682DF0000000001030307) 
Dec 06 09:19:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13950 DF PROTO=TCP SPT=50072 DPT=9105 SEQ=1936688337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E68F1F0000000001030307) 
Dec 06 09:19:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38587 DF PROTO=TCP SPT=50756 DPT=9105 SEQ=1862734501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E69A9F0000000001030307) 
Dec 06 09:19:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:19:32 np0005548790.localdomain podman[108686]: 2025-12-06 09:19:32.330929914 +0000 UTC m=+0.089201864 container health_status 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, 
distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4)
Dec 06 09:19:32 np0005548790.localdomain podman[108686]: 2025-12-06 09:19:32.689287694 +0000 UTC m=+0.447559674 container exec_died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Dec 06 09:19:32 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Deactivated successfully.
Dec 06 09:19:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17452 DF PROTO=TCP SPT=53452 DPT=9102 SEQ=23459929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6A71F0000000001030307) 
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: tmp-crun.VDfvLd.mount: Deactivated successfully.
Dec 06 09:19:34 np0005548790.localdomain podman[108710]: 2025-12-06 09:19:34.586934144 +0000 UTC m=+0.101989276 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 06 09:19:34 np0005548790.localdomain podman[108710]: 2025-12-06 09:19:34.635153657 +0000 UTC m=+0.150208759 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=)
Dec 06 09:19:34 np0005548790.localdomain podman[108710]: unhealthy
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:19:34 np0005548790.localdomain podman[108520]: time="2025-12-06T09:19:34Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL"
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: libpod-1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.scope: Deactivated successfully.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: libpod-1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.scope: Consumed 27.707s CPU time.
Dec 06 09:19:34 np0005548790.localdomain podman[108520]: 2025-12-06 09:19:34.745365223 +0000 UTC m=+42.103219999 container died 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.timer: Deactivated successfully.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed to open /run/systemd/transient/1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: No such file or directory
Dec 06 09:19:34 np0005548790.localdomain podman[108711]: 2025-12-06 09:19:34.746181815 +0000 UTC m=+0.254346203 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Dec 06 09:19:34 np0005548790.localdomain podman[108520]: 2025-12-06 09:19:34.807400036 +0000 UTC m=+42.165254772 container cleanup 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, version=17.1.12)
Dec 06 09:19:34 np0005548790.localdomain podman[108520]: nova_compute
Dec 06 09:19:34 np0005548790.localdomain podman[108711]: 2025-12-06 09:19:34.834158533 +0000 UTC m=+0.342322901 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public)
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.timer: Failed to open /run/systemd/transient/1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.timer: No such file or directory
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed to open /run/systemd/transient/1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: No such file or directory
Dec 06 09:19:34 np0005548790.localdomain podman[108711]: unhealthy
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:19:34 np0005548790.localdomain podman[108752]: 2025-12-06 09:19:34.890120734 +0000 UTC m=+0.121585851 container cleanup 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: libpod-conmon-1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.scope: Deactivated successfully.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.timer: Failed to open /run/systemd/transient/1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.timer: No such file or directory
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: Failed to open /run/systemd/transient/1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254.service: No such file or directory
Dec 06 09:19:34 np0005548790.localdomain podman[108767]: 2025-12-06 09:19:34.977409316 +0000 UTC m=+0.058838669 container cleanup 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 06 09:19:34 np0005548790.localdomain podman[108767]: nova_compute
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: Stopped nova_compute container.
Dec 06 09:19:34 np0005548790.localdomain systemd[1]: tripleo_nova_compute.service: Consumed 1.184s CPU time, no IO.
Dec 06 09:19:35 np0005548790.localdomain sudo[108478]: pam_unix(sudo:session): session closed for user root
Dec 06 09:19:35 np0005548790.localdomain sudo[108870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhxtmqgnhlnknprmqmltrvxvwxlkzkio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012775.152325-116-138113323383877/AnsiballZ_systemd_service.py
Dec 06 09:19:35 np0005548790.localdomain sudo[108870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:19:35 np0005548790.localdomain systemd[1]: tmp-crun.yoGCxK.mount: Deactivated successfully.
Dec 06 09:19:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3a613f65a5b410bb267b33e0b08cce4603bbfb5f7e30ec2a1f53d0927d5f78cd-merged.mount: Deactivated successfully.
Dec 06 09:19:35 np0005548790.localdomain python3.9[108872]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:19:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45195 DF PROTO=TCP SPT=56158 DPT=9101 SEQ=4017190583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6B11F0000000001030307) 
Dec 06 09:19:36 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:19:36 np0005548790.localdomain systemd-sysv-generator[108900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:19:36 np0005548790.localdomain systemd-rc-local-generator[108896]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: Stopping nova_migration_target container...
Dec 06 09:19:37 np0005548790.localdomain sshd[69844]: Received signal 15; terminating.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: libpod-8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.scope: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: libpod-8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.scope: Consumed 34.272s CPU time.
Dec 06 09:19:37 np0005548790.localdomain podman[108913]: 2025-12-06 09:19:37.282649555 +0000 UTC m=+0.077750685 container died 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.timer: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Failed to open /run/systemd/transient/8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: No such file or directory
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: tmp-crun.dyC6qZ.mount: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757-userdata-shm.mount: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ba8b2af72d8dcbf02be78782d6a093327973a6f19db17113d2698cfcfba8f0d1-merged.mount: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain podman[108913]: 2025-12-06 09:19:37.339254074 +0000 UTC m=+0.134355134 container cleanup 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_migration_target, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 06 09:19:37 np0005548790.localdomain podman[108913]: nova_migration_target
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.timer: Failed to open /run/systemd/transient/8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.timer: No such file or directory
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Failed to open /run/systemd/transient/8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: No such file or directory
Dec 06 09:19:37 np0005548790.localdomain podman[108926]: 2025-12-06 09:19:37.368827157 +0000 UTC m=+0.072624869 container cleanup 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4)
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: libpod-conmon-8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.scope: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.timer: Failed to open /run/systemd/transient/8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.timer: No such file or directory
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: Failed to open /run/systemd/transient/8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757.service: No such file or directory
Dec 06 09:19:37 np0005548790.localdomain podman[108941]: 2025-12-06 09:19:37.466756373 +0000 UTC m=+0.069608398 container cleanup 8c8b508e3a997320c7731bdfac98743e1e1aec516b1bae9552c4c8a63e7ad757 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc.)
Dec 06 09:19:37 np0005548790.localdomain podman[108941]: nova_migration_target
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Dec 06 09:19:37 np0005548790.localdomain systemd[1]: Stopped nova_migration_target container.
Dec 06 09:19:37 np0005548790.localdomain sudo[108870]: pam_unix(sudo:session): session closed for user root
Dec 06 09:19:38 np0005548790.localdomain sudo[109041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjgnyvaezgpkgvzomgwrafaaqoaopdch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012777.6643243-116-264972587353650/AnsiballZ_systemd_service.py
Dec 06 09:19:38 np0005548790.localdomain sudo[109041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:19:38 np0005548790.localdomain python3.9[109043]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:19:38 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:19:38 np0005548790.localdomain systemd-sysv-generator[109077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:19:38 np0005548790.localdomain systemd-rc-local-generator[109073]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:19:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:19:38 np0005548790.localdomain systemd[1]: Stopping nova_virtlogd_wrapper container...
Dec 06 09:19:38 np0005548790.localdomain systemd[1]: libpod-97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94.scope: Deactivated successfully.
Dec 06 09:19:38 np0005548790.localdomain podman[109085]: 2025-12-06 09:19:38.812248505 +0000 UTC m=+0.076737108 container died 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, container_name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:19:38 np0005548790.localdomain podman[109085]: 2025-12-06 09:19:38.925408521 +0000 UTC m=+0.189897124 container cleanup 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:19:38 np0005548790.localdomain podman[109085]: nova_virtlogd_wrapper
Dec 06 09:19:38 np0005548790.localdomain podman[109099]: 2025-12-06 09:19:38.938955004 +0000 UTC m=+0.117693997 container cleanup 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, architecture=x86_64, release=1761123044, container_name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt)
Dec 06 09:19:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059-merged.mount: Deactivated successfully.
Dec 06 09:19:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94-userdata-shm.mount: Deactivated successfully.
Dec 06 09:19:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50113 DF PROTO=TCP SPT=39194 DPT=9100 SEQ=223364499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6BF1F0000000001030307) 
Dec 06 09:19:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63150 DF PROTO=TCP SPT=32780 DPT=9100 SEQ=3711610982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6CBA00000000001030307) 
Dec 06 09:19:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33735 DF PROTO=TCP SPT=36622 DPT=9102 SEQ=4246269560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6DF910000000001030307) 
Dec 06 09:19:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14954 DF PROTO=TCP SPT=54512 DPT=9882 SEQ=3651573160 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6DF980000000001030307) 
Dec 06 09:19:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33737 DF PROTO=TCP SPT=36622 DPT=9102 SEQ=4246269560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6EB9F0000000001030307) 
Dec 06 09:19:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56743 DF PROTO=TCP SPT=49478 DPT=9105 SEQ=508360540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E6F7DF0000000001030307) 
Dec 06 09:19:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56620 DF PROTO=TCP SPT=43832 DPT=9105 SEQ=155378256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7031F0000000001030307) 
Dec 06 09:20:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56745 DF PROTO=TCP SPT=49478 DPT=9105 SEQ=508360540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E70F9F0000000001030307) 
Dec 06 09:20:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33739 DF PROTO=TCP SPT=36622 DPT=9102 SEQ=4246269560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E71B1F0000000001030307) 
Dec 06 09:20:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:20:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:20:05 np0005548790.localdomain podman[109115]: 2025-12-06 09:20:05.076855513 +0000 UTC m=+0.084132527 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64)
Dec 06 09:20:05 np0005548790.localdomain podman[109115]: 2025-12-06 09:20:05.119126247 +0000 UTC m=+0.126403301 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 06 09:20:05 np0005548790.localdomain podman[109115]: unhealthy
Dec 06 09:20:05 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:20:05 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:20:05 np0005548790.localdomain podman[109114]: 2025-12-06 09:20:05.13264983 +0000 UTC m=+0.140661323 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044)
Dec 06 09:20:05 np0005548790.localdomain podman[109114]: 2025-12-06 09:20:05.173771703 +0000 UTC m=+0.181783256 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ovn_metadata_agent, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Dec 06 09:20:05 np0005548790.localdomain podman[109114]: unhealthy
Dec 06 09:20:05 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:20:05 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:20:05 np0005548790.localdomain sudo[109155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:20:05 np0005548790.localdomain sudo[109155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:20:05 np0005548790.localdomain sudo[109155]: pam_unix(sudo:session): session closed for user root
Dec 06 09:20:05 np0005548790.localdomain sudo[109170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:20:05 np0005548790.localdomain sudo[109170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:20:06 np0005548790.localdomain sudo[109170]: pam_unix(sudo:session): session closed for user root
Dec 06 09:20:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13853 DF PROTO=TCP SPT=35034 DPT=9101 SEQ=44470508 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7271F0000000001030307) 
Dec 06 09:20:06 np0005548790.localdomain sudo[109217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:20:06 np0005548790.localdomain sudo[109217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:20:06 np0005548790.localdomain sudo[109217]: pam_unix(sudo:session): session closed for user root
Dec 06 09:20:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:20:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:20:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28201 DF PROTO=TCP SPT=51228 DPT=9100 SEQ=569190495 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7351F0000000001030307) 
Dec 06 09:20:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:20:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:20:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44694 DF PROTO=TCP SPT=39154 DPT=9100 SEQ=2640866428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E740DF0000000001030307) 
Dec 06 09:20:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6011 DF PROTO=TCP SPT=46258 DPT=9102 SEQ=722992827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E754C00000000001030307) 
Dec 06 09:20:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59921 DF PROTO=TCP SPT=60606 DPT=9882 SEQ=3649322165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E754C90000000001030307) 
Dec 06 09:20:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59923 DF PROTO=TCP SPT=60606 DPT=9882 SEQ=3649322165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E760DF0000000001030307) 
Dec 06 09:20:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42315 DF PROTO=TCP SPT=37098 DPT=9105 SEQ=1848255424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E76D200000000001030307) 
Dec 06 09:20:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38590 DF PROTO=TCP SPT=50756 DPT=9105 SEQ=1862734501 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7791F0000000001030307) 
Dec 06 09:20:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42317 DF PROTO=TCP SPT=37098 DPT=9105 SEQ=1848255424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E784DF0000000001030307) 
Dec 06 09:20:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6015 DF PROTO=TCP SPT=46258 DPT=9102 SEQ=722992827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7911F0000000001030307) 
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: tmp-crun.TEfoWK.mount: Deactivated successfully.
Dec 06 09:20:35 np0005548790.localdomain podman[109233]: 2025-12-06 09:20:35.316973474 +0000 UTC m=+0.077913301 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, 
vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 06 09:20:35 np0005548790.localdomain podman[109232]: 2025-12-06 09:20:35.385255265 +0000 UTC m=+0.145544395 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:20:35 np0005548790.localdomain podman[109232]: 2025-12-06 09:20:35.398944352 +0000 UTC m=+0.159233502 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:20:35 np0005548790.localdomain podman[109232]: unhealthy
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:20:35 np0005548790.localdomain podman[109233]: 2025-12-06 09:20:35.457837931 +0000 UTC m=+0.218777748 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:20:35 np0005548790.localdomain podman[109233]: unhealthy
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:20:35 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:20:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9652 DF PROTO=TCP SPT=59060 DPT=9101 SEQ=4226902233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E79B1F0000000001030307) 
Dec 06 09:20:39 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63153 DF PROTO=TCP SPT=32780 DPT=9100 SEQ=3711610982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7A9200000000001030307) 
Dec 06 09:20:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4851 DF PROTO=TCP SPT=40442 DPT=9100 SEQ=2344009568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7B61F0000000001030307) 
Dec 06 09:20:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48351 DF PROTO=TCP SPT=58494 DPT=9102 SEQ=90664483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7C9F10000000001030307) 
Dec 06 09:20:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13089 DF PROTO=TCP SPT=59440 DPT=9882 SEQ=1986449255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7C9F70000000001030307) 
Dec 06 09:20:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48353 DF PROTO=TCP SPT=58494 DPT=9102 SEQ=90664483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7D5DF0000000001030307) 
Dec 06 09:20:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21631 DF PROTO=TCP SPT=41382 DPT=9105 SEQ=793707177 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7E2600000000001030307) 
Dec 06 09:20:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56748 DF PROTO=TCP SPT=49478 DPT=9105 SEQ=508360540 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7ED200000000001030307) 
Dec 06 09:21:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21633 DF PROTO=TCP SPT=41382 DPT=9105 SEQ=793707177 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E7FA1F0000000001030307) 
Dec 06 09:21:02 np0005548790.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Dec 06 09:21:02 np0005548790.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61791 (conmon) with signal SIGKILL.
Dec 06 09:21:02 np0005548790.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Dec 06 09:21:02 np0005548790.localdomain systemd[1]: libpod-conmon-97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94.scope: Deactivated successfully.
Dec 06 09:21:03 np0005548790.localdomain podman[109284]: error opening file `/run/crun/97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94/status`: No such file or directory
Dec 06 09:21:03 np0005548790.localdomain systemd[1]: tmp-crun.y9r5ZN.mount: Deactivated successfully.
Dec 06 09:21:03 np0005548790.localdomain podman[109273]: 2025-12-06 09:21:03.061852399 +0000 UTC m=+0.073650346 container cleanup 97b023c025806445deae14e86e94f9f9bd79c09975d803afcacb9a5317cf3a94 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtlogd_wrapper, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, release=1761123044, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 09:21:03 np0005548790.localdomain podman[109273]: nova_virtlogd_wrapper
Dec 06 09:21:03 np0005548790.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Dec 06 09:21:03 np0005548790.localdomain systemd[1]: Stopped nova_virtlogd_wrapper container.
Dec 06 09:21:03 np0005548790.localdomain sudo[109041]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:03 np0005548790.localdomain sudo[109375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrwnclvnbilczhkjgemnspkkvxnlbskn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012863.218258-116-61406720085003/AnsiballZ_systemd_service.py
Dec 06 09:21:03 np0005548790.localdomain sudo[109375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48355 DF PROTO=TCP SPT=58494 DPT=9102 SEQ=90664483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8051F0000000001030307) 
Dec 06 09:21:03 np0005548790.localdomain python3.9[109377]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:03 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:03 np0005548790.localdomain systemd-rc-local-generator[109402]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:03 np0005548790.localdomain systemd-sysv-generator[109405]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: Stopping nova_virtnodedevd container...
Dec 06 09:21:04 np0005548790.localdomain recover_tripleo_nova_virtqemud[109419]: 62556
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: libpod-3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1.scope: Deactivated successfully.
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: libpod-3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1.scope: Consumed 1.415s CPU time.
Dec 06 09:21:04 np0005548790.localdomain podman[109420]: 2025-12-06 09:21:04.238521735 +0000 UTC m=+0.071580692 container died 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1)
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:04 np0005548790.localdomain podman[109420]: 2025-12-06 09:21:04.281842076 +0000 UTC m=+0.114901003 container cleanup 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, container_name=nova_virtnodedevd, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', 
'/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 06 09:21:04 np0005548790.localdomain podman[109420]: nova_virtnodedevd
Dec 06 09:21:04 np0005548790.localdomain podman[109433]: 2025-12-06 09:21:04.321188111 +0000 UTC m=+0.069897276 container cleanup 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, container_name=nova_virtnodedevd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: libpod-conmon-3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1.scope: Deactivated successfully.
Dec 06 09:21:04 np0005548790.localdomain podman[109464]: error opening file `/run/crun/3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1/status`: No such file or directory
Dec 06 09:21:04 np0005548790.localdomain podman[109452]: 2025-12-06 09:21:04.413654821 +0000 UTC m=+0.057767180 container cleanup 3da71cdf11184a768bf6160f2ccd670dbb882c1138b39c104b8d2321959543e1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:21:04 np0005548790.localdomain podman[109452]: nova_virtnodedevd
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Dec 06 09:21:04 np0005548790.localdomain systemd[1]: Stopped nova_virtnodedevd container.
Dec 06 09:21:04 np0005548790.localdomain sudo[109375]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:04 np0005548790.localdomain sudo[109555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxnullaerkxoyvvohldiuahcokrzrrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012864.5899947-116-215805611692027/AnsiballZ_systemd_service.py
Dec 06 09:21:04 np0005548790.localdomain sudo[109555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:05 np0005548790.localdomain python3.9[109557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:05 np0005548790.localdomain systemd-sysv-generator[109588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:05 np0005548790.localdomain systemd-rc-local-generator[109583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4-merged.mount: Deactivated successfully.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: Stopping nova_virtproxyd container...
Dec 06 09:21:05 np0005548790.localdomain podman[109597]: 2025-12-06 09:21:05.618993295 +0000 UTC m=+0.083669814 container health_status 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true)
Dec 06 09:21:05 np0005548790.localdomain podman[109597]: 2025-12-06 09:21:05.659340527 +0000 UTC m=+0.124017026 container exec_died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-type=git)
Dec 06 09:21:05 np0005548790.localdomain podman[109597]: unhealthy
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: tmp-crun.lhw6Y7.mount: Deactivated successfully.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed with result 'exit-code'.
Dec 06 09:21:05 np0005548790.localdomain podman[109598]: 2025-12-06 09:21:05.678053509 +0000 UTC m=+0.142378949 container health_status 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, 
config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.buildah.version=1.41.4, container_name=ovn_controller, url=https://www.redhat.com)
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: libpod-062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2.scope: Deactivated successfully.
Dec 06 09:21:05 np0005548790.localdomain podman[109600]: 2025-12-06 09:21:05.719192612 +0000 UTC m=+0.179687600 container died 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtproxyd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 09:21:05 np0005548790.localdomain podman[109598]: 2025-12-06 09:21:05.74595834 +0000 UTC m=+0.210283810 container exec_died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, release=1761123044, 
config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:21:05 np0005548790.localdomain podman[109598]: unhealthy
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed with result 'exit-code'.
Dec 06 09:21:05 np0005548790.localdomain podman[109600]: 2025-12-06 09:21:05.806334769 +0000 UTC m=+0.266829757 container cleanup 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=nova_virtproxyd, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 09:21:05 np0005548790.localdomain podman[109600]: nova_virtproxyd
Dec 06 09:21:05 np0005548790.localdomain podman[109650]: 2025-12-06 09:21:05.819885592 +0000 UTC m=+0.092509172 container cleanup 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: libpod-conmon-062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2.scope: Deactivated successfully.
Dec 06 09:21:05 np0005548790.localdomain podman[109680]: error opening file `/run/crun/062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2/status`: No such file or directory
Dec 06 09:21:05 np0005548790.localdomain podman[109669]: 2025-12-06 09:21:05.8932877 +0000 UTC m=+0.046951139 container cleanup 062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, container_name=nova_virtproxyd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:35:22Z, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 09:21:05 np0005548790.localdomain podman[109669]: nova_virtproxyd
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Dec 06 09:21:05 np0005548790.localdomain systemd[1]: Stopped nova_virtproxyd container.
Dec 06 09:21:05 np0005548790.localdomain sudo[109555]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:06 np0005548790.localdomain sudo[109771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofbbcspzpkyquteifyxvapowuihhirgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012866.0349634-116-240053825864184/AnsiballZ_systemd_service.py
Dec 06 09:21:06 np0005548790.localdomain sudo[109771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61-merged.mount: Deactivated successfully.
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-062d833a452392c20587d4ca3912b26ff638ef1ea56eba26dd074d03b54fcad2-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23178 DF PROTO=TCP SPT=59336 DPT=9101 SEQ=623803957 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8111F0000000001030307) 
Dec 06 09:21:06 np0005548790.localdomain python3.9[109773]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:06 np0005548790.localdomain systemd-rc-local-generator[109796]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:06 np0005548790.localdomain systemd-sysv-generator[109799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:06 np0005548790.localdomain sudo[109810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:21:06 np0005548790.localdomain sudo[109810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Dec 06 09:21:06 np0005548790.localdomain sudo[109810]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:06 np0005548790.localdomain systemd[1]: Stopping nova_virtqemud container...
Dec 06 09:21:07 np0005548790.localdomain sudo[109830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:21:07 np0005548790.localdomain sudo[109830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: tmp-crun.ZHZs1K.mount: Deactivated successfully.
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: libpod-955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f.scope: Deactivated successfully.
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: libpod-955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f.scope: Consumed 2.093s CPU time.
Dec 06 09:21:07 np0005548790.localdomain podman[109829]: 2025-12-06 09:21:07.069453172 +0000 UTC m=+0.089271775 container died 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_virtqemud, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Dec 06 09:21:07 np0005548790.localdomain podman[109829]: 2025-12-06 09:21:07.10551937 +0000 UTC m=+0.125337953 container cleanup 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:21:07 np0005548790.localdomain podman[109829]: nova_virtqemud
Dec 06 09:21:07 np0005548790.localdomain podman[109859]: 2025-12-06 09:21:07.152327965 +0000 UTC m=+0.069039003 container cleanup 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, container_name=nova_virtqemud, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12)
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: libpod-conmon-955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f.scope: Deactivated successfully.
Dec 06 09:21:07 np0005548790.localdomain podman[109884]: error opening file `/run/crun/955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f/status`: No such file or directory
Dec 06 09:21:07 np0005548790.localdomain podman[109873]: 2025-12-06 09:21:07.270844263 +0000 UTC m=+0.084184818 container cleanup 955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 06 09:21:07 np0005548790.localdomain podman[109873]: nova_virtqemud
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully.
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: Stopped nova_virtqemud container.
Dec 06 09:21:07 np0005548790.localdomain sudo[109771]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618-merged.mount: Deactivated successfully.
Dec 06 09:21:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-955a185f06c627fd6869ca3b1cd3398316e754f2eda20d9ab7cb1a56c030723f-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:07 np0005548790.localdomain sudo[110000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewfifgucomhvjzlvomsdtkfjlaedagxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012867.4388053-116-83311769265967/AnsiballZ_systemd_service.py
Dec 06 09:21:07 np0005548790.localdomain sudo[110000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:07 np0005548790.localdomain sudo[109830]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:07 np0005548790.localdomain python3.9[110009]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:09 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:09 np0005548790.localdomain systemd-rc-local-generator[110035]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:09 np0005548790.localdomain systemd-sysv-generator[110042]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:09 np0005548790.localdomain sudo[110000]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:09 np0005548790.localdomain sudo[110137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewpwyrredhsybswuqohqpmuratpbtmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012869.4977615-116-18747478075184/AnsiballZ_systemd_service.py
Dec 06 09:21:09 np0005548790.localdomain sudo[110137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:10 np0005548790.localdomain python3.9[110139]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44697 DF PROTO=TCP SPT=39154 DPT=9100 SEQ=2640866428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E81F200000000001030307) 
Dec 06 09:21:10 np0005548790.localdomain systemd-rc-local-generator[110167]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:10 np0005548790.localdomain systemd-sysv-generator[110172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: Stopping nova_virtsecretd container...
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: libpod-88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912.scope: Deactivated successfully.
Dec 06 09:21:10 np0005548790.localdomain podman[110180]: 2025-12-06 09:21:10.497989677 +0000 UTC m=+0.075841415 container died 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 06 09:21:10 np0005548790.localdomain podman[110180]: 2025-12-06 09:21:10.534904877 +0000 UTC m=+0.112756595 container cleanup 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, release=1761123044, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtsecretd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 06 09:21:10 np0005548790.localdomain podman[110180]: nova_virtsecretd
Dec 06 09:21:10 np0005548790.localdomain podman[110193]: 2025-12-06 09:21:10.577861499 +0000 UTC m=+0.073047169 container cleanup 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtsecretd, config_id=tripleo_step3, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true)
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: libpod-conmon-88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912.scope: Deactivated successfully.
Dec 06 09:21:10 np0005548790.localdomain podman[110221]: error opening file `/run/crun/88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912/status`: No such file or directory
Dec 06 09:21:10 np0005548790.localdomain podman[110209]: 2025-12-06 09:21:10.677659265 +0000 UTC m=+0.065808295 container cleanup 88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, container_name=nova_virtsecretd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044)
Dec 06 09:21:10 np0005548790.localdomain podman[110209]: nova_virtsecretd
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Dec 06 09:21:10 np0005548790.localdomain systemd[1]: Stopped nova_virtsecretd container.
Dec 06 09:21:10 np0005548790.localdomain sudo[110137]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:10 np0005548790.localdomain sudo[110223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:21:10 np0005548790.localdomain sudo[110223]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:21:10 np0005548790.localdomain sudo[110223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:11 np0005548790.localdomain sudo[110327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvdvmnjkvbhtmebolduuwrinyvyvigpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012870.848514-116-82371792110092/AnsiballZ_systemd_service.py
Dec 06 09:21:11 np0005548790.localdomain sudo[110327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:11 np0005548790.localdomain python3.9[110329]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4-merged.mount: Deactivated successfully.
Dec 06 09:21:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88da17bd57495bf861303cba36153f51080615a3ef2323dd08baf107ea35b912-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:11 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:11 np0005548790.localdomain systemd-rc-local-generator[110354]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:11 np0005548790.localdomain systemd-sysv-generator[110358]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:11 np0005548790.localdomain systemd[1]: Stopping nova_virtstoraged container...
Dec 06 09:21:11 np0005548790.localdomain systemd[1]: libpod-cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c.scope: Deactivated successfully.
Dec 06 09:21:11 np0005548790.localdomain podman[110370]: 2025-12-06 09:21:11.903028997 +0000 UTC m=+0.074295624 container died cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, vcs-type=git, container_name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 06 09:21:11 np0005548790.localdomain podman[110370]: 2025-12-06 09:21:11.953632074 +0000 UTC m=+0.124898681 container cleanup cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step3)
Dec 06 09:21:11 np0005548790.localdomain podman[110370]: nova_virtstoraged
Dec 06 09:21:11 np0005548790.localdomain podman[110385]: 2025-12-06 09:21:11.980855124 +0000 UTC m=+0.066146675 container cleanup cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Dec 06 09:21:12 np0005548790.localdomain systemd[1]: libpod-conmon-cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c.scope: Deactivated successfully.
Dec 06 09:21:12 np0005548790.localdomain podman[110413]: error opening file `/run/crun/cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c/status`: No such file or directory
Dec 06 09:21:12 np0005548790.localdomain podman[110402]: 2025-12-06 09:21:12.07952865 +0000 UTC m=+0.062708893 container cleanup cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '1c14d9f34e8565ad391b489e982af70f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:35:22Z, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtstoraged, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible)
Dec 06 09:21:12 np0005548790.localdomain podman[110402]: nova_virtstoraged
Dec 06 09:21:12 np0005548790.localdomain systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Dec 06 09:21:12 np0005548790.localdomain systemd[1]: Stopped nova_virtstoraged container.
Dec 06 09:21:12 np0005548790.localdomain sudo[110327]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:12 np0005548790.localdomain sudo[110504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxpziuxsiityppxcnzngbhsoexpxwsit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012872.227984-116-115889022112236/AnsiballZ_systemd_service.py
Dec 06 09:21:12 np0005548790.localdomain sudo[110504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339-merged.mount: Deactivated successfully.
Dec 06 09:21:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf855de1a4fb0e317fcb88f2c4523f8d6c248f9a4ed34d1f894c5b1977d5029c-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:12 np0005548790.localdomain python3.9[110506]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:12 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:12 np0005548790.localdomain systemd-rc-local-generator[110532]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:12 np0005548790.localdomain systemd-sysv-generator[110538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: Stopping ovn_controller container...
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: libpod-8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.scope: Deactivated successfully.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: libpod-8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.scope: Consumed 2.537s CPU time.
Dec 06 09:21:13 np0005548790.localdomain podman[110547]: 2025-12-06 09:21:13.225177084 +0000 UTC m=+0.074201372 container died 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.timer: Deactivated successfully.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed to open /run/systemd/transient/8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: No such file or directory
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53343 DF PROTO=TCP SPT=40020 DPT=9100 SEQ=3089435200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E82B1F0000000001030307) 
Dec 06 09:21:13 np0005548790.localdomain podman[110547]: 2025-12-06 09:21:13.270803127 +0000 UTC m=+0.119827395 container cleanup 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team)
Dec 06 09:21:13 np0005548790.localdomain podman[110547]: ovn_controller
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.timer: Failed to open /run/systemd/transient/8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.timer: No such file or directory
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed to open /run/systemd/transient/8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: No such file or directory
Dec 06 09:21:13 np0005548790.localdomain podman[110559]: 2025-12-06 09:21:13.308992341 +0000 UTC m=+0.072606118 container cleanup 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: libpod-conmon-8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.scope: Deactivated successfully.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.timer: Failed to open /run/systemd/transient/8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.timer: No such file or directory
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: Failed to open /run/systemd/transient/8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3.service: No such file or directory
Dec 06 09:21:13 np0005548790.localdomain podman[110575]: 2025-12-06 09:21:13.408014957 +0000 UTC m=+0.062286272 container cleanup 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 06 09:21:13 np0005548790.localdomain podman[110575]: ovn_controller
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: Stopped ovn_controller container.
Dec 06 09:21:13 np0005548790.localdomain sudo[110504]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7395787d7c7157781d23da507bf0dd85e09ccfaea104452a9558b49679a2c1ad-merged.mount: Deactivated successfully.
Dec 06 09:21:13 np0005548790.localdomain sudo[110677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djndzjkedxerstgmxeadvbpwugfgtzti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012873.735187-116-106018882237604/AnsiballZ_systemd_service.py
Dec 06 09:21:13 np0005548790.localdomain sudo[110677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:14 np0005548790.localdomain python3.9[110679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:14 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:14 np0005548790.localdomain systemd-rc-local-generator[110702]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:14 np0005548790.localdomain systemd-sysv-generator[110707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:14 np0005548790.localdomain systemd[1]: Stopping ovn_metadata_agent container...
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: libpod-8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.scope: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: libpod-8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.scope: Consumed 9.131s CPU time.
Dec 06 09:21:15 np0005548790.localdomain podman[110719]: 2025-12-06 09:21:15.182750461 +0000 UTC m=+0.551422369 container died 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1)
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: tmp-crun.dZ9uV4.mount: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.timer: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed to open /run/systemd/transient/8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: No such file or directory
Dec 06 09:21:15 np0005548790.localdomain podman[110719]: 2025-12-06 09:21:15.300977541 +0000 UTC m=+0.669649469 container cleanup 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 06 09:21:15 np0005548790.localdomain podman[110719]: ovn_metadata_agent
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.timer: Failed to open /run/systemd/transient/8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.timer: No such file or directory
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed to open /run/systemd/transient/8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: No such file or directory
Dec 06 09:21:15 np0005548790.localdomain podman[110732]: 2025-12-06 09:21:15.324583783 +0000 UTC m=+0.132387180 container cleanup 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64)
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: libpod-conmon-8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.scope: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain podman[110765]: error opening file `/run/crun/8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5/status`: No such file or directory
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.timer: Failed to open /run/systemd/transient/8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.timer: No such file or directory
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: Failed to open /run/systemd/transient/8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5.service: No such file or directory
Dec 06 09:21:15 np0005548790.localdomain podman[110752]: 2025-12-06 09:21:15.434834781 +0000 UTC m=+0.074301634 container cleanup 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 06 09:21:15 np0005548790.localdomain podman[110752]: ovn_metadata_agent
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: Stopped ovn_metadata_agent container.
Dec 06 09:21:15 np0005548790.localdomain sudo[110677]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8284dca6529e5ab9438e0117511d130f8650dec0e9dc23d1b17bfc3ebcf839dd-merged.mount: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5-userdata-shm.mount: Deactivated successfully.
Dec 06 09:21:15 np0005548790.localdomain sudo[110856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yoqdqdrnksppzdlhjnasuftpzjfnnzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765012875.6123953-116-61414907440442/AnsiballZ_systemd_service.py
Dec 06 09:21:15 np0005548790.localdomain sudo[110856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:21:16 np0005548790.localdomain python3.9[110858]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:21:16 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:21:16 np0005548790.localdomain systemd-rc-local-generator[110886]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:21:16 np0005548790.localdomain systemd-sysv-generator[110892]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:21:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:21:16 np0005548790.localdomain sudo[110856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:21:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34296 DF PROTO=TCP SPT=41224 DPT=9102 SEQ=2041561981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E83F210000000001030307) 
Dec 06 09:21:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57067 DF PROTO=TCP SPT=37240 DPT=9882 SEQ=3297065380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E83F270000000001030307) 
Dec 06 09:21:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53344 DF PROTO=TCP SPT=40020 DPT=9100 SEQ=3089435200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E84B1F0000000001030307) 
Dec 06 09:21:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16364 DF PROTO=TCP SPT=41768 DPT=9105 SEQ=1336498921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E857A00000000001030307) 
Dec 06 09:21:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42320 DF PROTO=TCP SPT=37098 DPT=9105 SEQ=1848255424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8631F0000000001030307) 
Dec 06 09:21:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16366 DF PROTO=TCP SPT=41768 DPT=9105 SEQ=1336498921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E86F600000000001030307) 
Dec 06 09:21:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57071 DF PROTO=TCP SPT=37240 DPT=9882 SEQ=3297065380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E87B1F0000000001030307) 
Dec 06 09:21:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51306 DF PROTO=TCP SPT=44870 DPT=9100 SEQ=2929681394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8889F0000000001030307) 
Dec 06 09:21:37 np0005548790.localdomain sshd[110912]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:21:38 np0005548790.localdomain sshd[110912]: Invalid user admin from 45.135.232.92 port 41628
Dec 06 09:21:39 np0005548790.localdomain sshd[110912]: Connection reset by invalid user admin 45.135.232.92 port 41628 [preauth]
Dec 06 09:21:39 np0005548790.localdomain sshd[110914]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:21:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4854 DF PROTO=TCP SPT=40442 DPT=9100 SEQ=2344009568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8951F0000000001030307) 
Dec 06 09:21:41 np0005548790.localdomain sshd[110914]: Invalid user admin from 45.135.232.92 port 41630
Dec 06 09:21:41 np0005548790.localdomain sshd[110914]: Connection reset by invalid user admin 45.135.232.92 port 41630 [preauth]
Dec 06 09:21:41 np0005548790.localdomain sshd[110916]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:21:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51308 DF PROTO=TCP SPT=44870 DPT=9100 SEQ=2929681394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8A05F0000000001030307) 
Dec 06 09:21:43 np0005548790.localdomain sshd[110916]: Invalid user admin from 45.135.232.92 port 41644
Dec 06 09:21:43 np0005548790.localdomain sshd[110916]: Connection reset by invalid user admin 45.135.232.92 port 41644 [preauth]
Dec 06 09:21:44 np0005548790.localdomain sshd[110918]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:21:45 np0005548790.localdomain sshd[110918]: Connection reset by authenticating user root 45.135.232.92 port 41662 [preauth]
Dec 06 09:21:45 np0005548790.localdomain sshd[110920]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:21:48 np0005548790.localdomain sshd[110920]: Connection reset by authenticating user root 45.135.232.92 port 51614 [preauth]
Dec 06 09:21:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8879 DF PROTO=TCP SPT=54266 DPT=9102 SEQ=1461243496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8B4510000000001030307) 
Dec 06 09:21:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14639 DF PROTO=TCP SPT=54276 DPT=9882 SEQ=4225041567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8B4570000000001030307) 
Dec 06 09:21:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14641 DF PROTO=TCP SPT=54276 DPT=9882 SEQ=4225041567 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8C0600000000001030307) 
Dec 06 09:21:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6295 DF PROTO=TCP SPT=50988 DPT=9105 SEQ=2336358970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8CC9F0000000001030307) 
Dec 06 09:21:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21636 DF PROTO=TCP SPT=41382 DPT=9105 SEQ=793707177 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8D91F0000000001030307) 
Dec 06 09:22:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6297 DF PROTO=TCP SPT=50988 DPT=9105 SEQ=2336358970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8E45F0000000001030307) 
Dec 06 09:22:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8883 DF PROTO=TCP SPT=54266 DPT=9102 SEQ=1461243496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8F11F0000000001030307) 
Dec 06 09:22:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9021 DF PROTO=TCP SPT=47310 DPT=9101 SEQ=1050910456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E8FB1F0000000001030307) 
Dec 06 09:22:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53346 DF PROTO=TCP SPT=40020 DPT=9100 SEQ=3089435200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9091F0000000001030307) 
Dec 06 09:22:10 np0005548790.localdomain sudo[110922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:22:10 np0005548790.localdomain sudo[110922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:22:10 np0005548790.localdomain sudo[110922]: pam_unix(sudo:session): session closed for user root
Dec 06 09:22:10 np0005548790.localdomain sudo[110937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 09:22:10 np0005548790.localdomain sudo[110937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:22:11 np0005548790.localdomain sudo[110937]: pam_unix(sudo:session): session closed for user root
Dec 06 09:22:11 np0005548790.localdomain sudo[110972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:22:11 np0005548790.localdomain sudo[110972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:22:11 np0005548790.localdomain sudo[110972]: pam_unix(sudo:session): session closed for user root
Dec 06 09:22:11 np0005548790.localdomain sudo[110987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:22:11 np0005548790.localdomain sudo[110987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:22:12 np0005548790.localdomain sudo[110987]: pam_unix(sudo:session): session closed for user root
Dec 06 09:22:12 np0005548790.localdomain sudo[111034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:22:12 np0005548790.localdomain sudo[111034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:22:12 np0005548790.localdomain sudo[111034]: pam_unix(sudo:session): session closed for user root
Dec 06 09:22:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=390 DF PROTO=TCP SPT=56238 DPT=9100 SEQ=1899876315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E915A00000000001030307) 
Dec 06 09:22:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14090 DF PROTO=TCP SPT=33134 DPT=9102 SEQ=1703874560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E929810000000001030307) 
Dec 06 09:22:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35313 DF PROTO=TCP SPT=38814 DPT=9882 SEQ=3988348118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E929880000000001030307) 
Dec 06 09:22:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35315 DF PROTO=TCP SPT=38814 DPT=9882 SEQ=3988348118 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9359F0000000001030307) 
Dec 06 09:22:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41140 DF PROTO=TCP SPT=59572 DPT=9105 SEQ=1939607592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E941DF0000000001030307) 
Dec 06 09:22:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16369 DF PROTO=TCP SPT=41768 DPT=9105 SEQ=1336498921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E94D200000000001030307) 
Dec 06 09:22:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41142 DF PROTO=TCP SPT=59572 DPT=9105 SEQ=1939607592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9599F0000000001030307) 
Dec 06 09:22:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14094 DF PROTO=TCP SPT=33134 DPT=9102 SEQ=1703874560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9651F0000000001030307) 
Dec 06 09:22:36 np0005548790.localdomain sshd[105652]: Received disconnect from 192.168.122.31 port 36100:11: disconnected by user
Dec 06 09:22:36 np0005548790.localdomain sshd[105652]: Disconnected from user zuul 192.168.122.31 port 36100
Dec 06 09:22:36 np0005548790.localdomain sshd[105609]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:22:36 np0005548790.localdomain systemd[1]: session-36.scope: Deactivated successfully.
Dec 06 09:22:36 np0005548790.localdomain systemd[1]: session-36.scope: Consumed 18.814s CPU time.
Dec 06 09:22:36 np0005548790.localdomain systemd-logind[760]: Session 36 logged out. Waiting for processes to exit.
Dec 06 09:22:36 np0005548790.localdomain systemd-logind[760]: Removed session 36.
Dec 06 09:22:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27939 DF PROTO=TCP SPT=50252 DPT=9101 SEQ=709570726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9711F0000000001030307) 
Dec 06 09:22:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51311 DF PROTO=TCP SPT=44870 DPT=9100 SEQ=2929681394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E97F1F0000000001030307) 
Dec 06 09:22:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9070 DF PROTO=TCP SPT=42286 DPT=9100 SEQ=3492161760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E98ADF0000000001030307) 
Dec 06 09:22:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34313 DF PROTO=TCP SPT=42956 DPT=9102 SEQ=3531948545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E99EB10000000001030307) 
Dec 06 09:22:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19644 DF PROTO=TCP SPT=44710 DPT=9882 SEQ=464813210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E99EB70000000001030307) 
Dec 06 09:22:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34315 DF PROTO=TCP SPT=42956 DPT=9102 SEQ=3531948545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9AA9F0000000001030307) 
Dec 06 09:22:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14834 DF PROTO=TCP SPT=49392 DPT=9105 SEQ=889014670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9B7200000000001030307) 
Dec 06 09:22:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6300 DF PROTO=TCP SPT=50988 DPT=9105 SEQ=2336358970 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9C31F0000000001030307) 
Dec 06 09:23:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14836 DF PROTO=TCP SPT=49392 DPT=9105 SEQ=889014670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9CEDF0000000001030307) 
Dec 06 09:23:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19648 DF PROTO=TCP SPT=44710 DPT=9882 SEQ=464813210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9DB1F0000000001030307) 
Dec 06 09:23:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32817 DF PROTO=TCP SPT=48092 DPT=9101 SEQ=1746772230 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9E5200000000001030307) 
Dec 06 09:23:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=393 DF PROTO=TCP SPT=56238 DPT=9100 SEQ=1899876315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9F31F0000000001030307) 
Dec 06 09:23:12 np0005548790.localdomain sudo[111049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:23:12 np0005548790.localdomain sudo[111049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:23:12 np0005548790.localdomain sudo[111049]: pam_unix(sudo:session): session closed for user root
Dec 06 09:23:13 np0005548790.localdomain sudo[111064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:23:13 np0005548790.localdomain sudo[111064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:23:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56157 DF PROTO=TCP SPT=45402 DPT=9100 SEQ=4129379874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17E9FFDF0000000001030307) 
Dec 06 09:23:13 np0005548790.localdomain sudo[111064]: pam_unix(sudo:session): session closed for user root
Dec 06 09:23:14 np0005548790.localdomain sudo[111110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:23:14 np0005548790.localdomain sudo[111110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:23:14 np0005548790.localdomain sudo[111110]: pam_unix(sudo:session): session closed for user root
Dec 06 09:23:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14003 DF PROTO=TCP SPT=46410 DPT=9102 SEQ=137522398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA13E00000000001030307) 
Dec 06 09:23:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47534 DF PROTO=TCP SPT=56402 DPT=9882 SEQ=111538646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA13E70000000001030307) 
Dec 06 09:23:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14005 DF PROTO=TCP SPT=46410 DPT=9102 SEQ=137522398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA1FE00000000001030307) 
Dec 06 09:23:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7349 DF PROTO=TCP SPT=49458 DPT=9105 SEQ=4150436835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA2C5F0000000001030307) 
Dec 06 09:23:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41145 DF PROTO=TCP SPT=59572 DPT=9105 SEQ=1939607592 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA371F0000000001030307) 
Dec 06 09:23:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7351 DF PROTO=TCP SPT=49458 DPT=9105 SEQ=4150436835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA44200000000001030307) 
Dec 06 09:23:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14007 DF PROTO=TCP SPT=46410 DPT=9102 SEQ=137522398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA4F1F0000000001030307) 
Dec 06 09:23:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45418 DF PROTO=TCP SPT=43662 DPT=9101 SEQ=4005955105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA5B200000000001030307) 
Dec 06 09:23:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9073 DF PROTO=TCP SPT=42286 DPT=9100 SEQ=3492161760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA69200000000001030307) 
Dec 06 09:23:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=673 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=1012669459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA75200000000001030307) 
Dec 06 09:23:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58468 DF PROTO=TCP SPT=46424 DPT=9102 SEQ=1741806848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA89110000000001030307) 
Dec 06 09:23:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26087 DF PROTO=TCP SPT=33842 DPT=9882 SEQ=927889910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA89180000000001030307) 
Dec 06 09:23:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=674 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=1012669459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EA951F0000000001030307) 
Dec 06 09:23:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=236 DF PROTO=TCP SPT=55322 DPT=9105 SEQ=3011661123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAA15F0000000001030307) 
Dec 06 09:23:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14839 DF PROTO=TCP SPT=49392 DPT=9105 SEQ=889014670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAAD1F0000000001030307) 
Dec 06 09:24:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=238 DF PROTO=TCP SPT=55322 DPT=9105 SEQ=3011661123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAB9200000000001030307) 
Dec 06 09:24:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58472 DF PROTO=TCP SPT=46424 DPT=9102 SEQ=1741806848 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAC51F0000000001030307) 
Dec 06 09:24:06 np0005548790.localdomain sshd[111126]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:24:06 np0005548790.localdomain sshd[111126]: Accepted publickey for zuul from 192.168.122.31 port 50120 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:24:06 np0005548790.localdomain systemd-logind[760]: New session 37 of user zuul.
Dec 06 09:24:06 np0005548790.localdomain systemd[1]: Started Session 37 of User zuul.
Dec 06 09:24:06 np0005548790.localdomain sshd[111126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:24:06 np0005548790.localdomain sudo[111205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqmzkdyoiaqkurmvlemmmvmtitcxsdqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013046.4270601-565-125504680295115/AnsiballZ_file.py
Dec 06 09:24:06 np0005548790.localdomain sudo[111205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:06 np0005548790.localdomain python3.9[111207]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:06 np0005548790.localdomain sudo[111205]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52088 DF PROTO=TCP SPT=53002 DPT=9100 SEQ=1962059556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAD29F0000000001030307) 
Dec 06 09:24:07 np0005548790.localdomain sudo[111297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xckbhnzbswjxbbrizwlwivpvplrqxbvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013046.98803-565-89576573318128/AnsiballZ_file.py
Dec 06 09:24:07 np0005548790.localdomain sudo[111297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:07 np0005548790.localdomain python3.9[111299]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:07 np0005548790.localdomain sudo[111297]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:07 np0005548790.localdomain sudo[111389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgzuvfubeqtbsdotfvxshhedhpurwyps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013047.5742326-565-191253372835019/AnsiballZ_file.py
Dec 06 09:24:07 np0005548790.localdomain sudo[111389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:08 np0005548790.localdomain python3.9[111391]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:08 np0005548790.localdomain sudo[111389]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:08 np0005548790.localdomain sudo[111481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llkysdczbjhmrhybalugncnuydebhoxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013048.1406863-565-214108684942687/AnsiballZ_file.py
Dec 06 09:24:08 np0005548790.localdomain sudo[111481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:08 np0005548790.localdomain python3.9[111483]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:08 np0005548790.localdomain sudo[111481]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:08 np0005548790.localdomain sudo[111573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzohkbktzznffrbwnjeqxlznrcwfdpoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013048.7145882-565-8583035555577/AnsiballZ_file.py
Dec 06 09:24:08 np0005548790.localdomain sudo[111573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:09 np0005548790.localdomain python3.9[111575]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:09 np0005548790.localdomain sudo[111573]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:09 np0005548790.localdomain sudo[111665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gshbmpucqvdzlmivdojnvuocguxaqiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013049.2626188-565-256462362845640/AnsiballZ_file.py
Dec 06 09:24:09 np0005548790.localdomain sudo[111665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:09 np0005548790.localdomain python3.9[111667]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:09 np0005548790.localdomain sudo[111665]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:10 np0005548790.localdomain sudo[111757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-behqtdyihpwojvtodpdowjimtofphnwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013049.8090565-565-274626130441277/AnsiballZ_file.py
Dec 06 09:24:10 np0005548790.localdomain sudo[111757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:10 np0005548790.localdomain python3.9[111759]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:10 np0005548790.localdomain sudo[111757]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56160 DF PROTO=TCP SPT=45402 DPT=9100 SEQ=4129379874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EADF1F0000000001030307) 
Dec 06 09:24:10 np0005548790.localdomain sudo[111849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptxgfiaduvawicwnfwufllsuondhgjug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013050.3369112-565-140117347215461/AnsiballZ_file.py
Dec 06 09:24:10 np0005548790.localdomain sudo[111849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:10 np0005548790.localdomain python3.9[111851]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:10 np0005548790.localdomain sudo[111849]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:11 np0005548790.localdomain sudo[111941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-noaturulpqymonnnwoqmfegrzhhkhkck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013050.8870237-565-228432094036114/AnsiballZ_file.py
Dec 06 09:24:11 np0005548790.localdomain sudo[111941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:11 np0005548790.localdomain python3.9[111943]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:11 np0005548790.localdomain sudo[111941]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:11 np0005548790.localdomain sudo[112033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrzpybsgylvqaglgjwycornbzuenqsqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013051.4527097-565-95237096137821/AnsiballZ_file.py
Dec 06 09:24:11 np0005548790.localdomain sudo[112033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:11 np0005548790.localdomain python3.9[112035]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:11 np0005548790.localdomain sudo[112033]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:12 np0005548790.localdomain sudo[112125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vurbpkiwvydflrtmgogrxpmxzqbvkgjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013052.0602527-565-260254816275817/AnsiballZ_file.py
Dec 06 09:24:12 np0005548790.localdomain sudo[112125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:12 np0005548790.localdomain python3.9[112127]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:12 np0005548790.localdomain sudo[112125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:12 np0005548790.localdomain sudo[112217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kovijxmhqrxrdcprzgfimcmkivjsrave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013052.558573-565-180738241958026/AnsiballZ_file.py
Dec 06 09:24:12 np0005548790.localdomain sudo[112217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:13 np0005548790.localdomain python3.9[112219]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:13 np0005548790.localdomain sudo[112217]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52090 DF PROTO=TCP SPT=53002 DPT=9100 SEQ=1962059556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAEA5F0000000001030307) 
Dec 06 09:24:13 np0005548790.localdomain sudo[112309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmvmgjvzpcxgyscikfmwvllcjtblpiad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013053.1518996-565-267513382813465/AnsiballZ_file.py
Dec 06 09:24:13 np0005548790.localdomain sudo[112309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:13 np0005548790.localdomain python3.9[112311]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:13 np0005548790.localdomain sudo[112309]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:13 np0005548790.localdomain sudo[112401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvsxdabhhmwyryfydwbakfhacsiglmsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013053.7148516-565-254267476051115/AnsiballZ_file.py
Dec 06 09:24:13 np0005548790.localdomain sudo[112401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:14 np0005548790.localdomain python3.9[112403]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:14 np0005548790.localdomain sudo[112401]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:14 np0005548790.localdomain sudo[112476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:24:14 np0005548790.localdomain sudo[112476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:24:14 np0005548790.localdomain sudo[112476]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:14 np0005548790.localdomain sudo[112507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktjxtidpssscjnmneswyevgdmivoetxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013054.3115423-565-152254585612016/AnsiballZ_file.py
Dec 06 09:24:14 np0005548790.localdomain sudo[112507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:14 np0005548790.localdomain sudo[112511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 09:24:14 np0005548790.localdomain sudo[112511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:24:14 np0005548790.localdomain python3.9[112510]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:14 np0005548790.localdomain sudo[112507]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:15 np0005548790.localdomain sudo[112657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otmokixlnlpzobtefcghyifbadehagrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013054.9049523-565-82546929127963/AnsiballZ_file.py
Dec 06 09:24:15 np0005548790.localdomain sudo[112657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:15 np0005548790.localdomain python3.9[112663]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:15 np0005548790.localdomain sudo[112657]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:15 np0005548790.localdomain systemd[1]: tmp-crun.rjgzAN.mount: Deactivated successfully.
Dec 06 09:24:15 np0005548790.localdomain podman[112690]: 2025-12-06 09:24:15.451917475 +0000 UTC m=+0.089255015 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph)
Dec 06 09:24:15 np0005548790.localdomain podman[112690]: 2025-12-06 09:24:15.559195402 +0000 UTC m=+0.196532982 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, name=rhceph, ceph=True, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64)
Dec 06 09:24:15 np0005548790.localdomain sudo[112511]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:15 np0005548790.localdomain sudo[112845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izeghfgzbwziatzdemjwzjsvnoocesao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013055.5537295-565-68769200108868/AnsiballZ_file.py
Dec 06 09:24:15 np0005548790.localdomain sudo[112845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:15 np0005548790.localdomain sudo[112848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:24:15 np0005548790.localdomain sudo[112848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:24:15 np0005548790.localdomain sudo[112848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:15 np0005548790.localdomain sudo[112863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:24:15 np0005548790.localdomain sudo[112863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:24:16 np0005548790.localdomain python3.9[112847]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:16 np0005548790.localdomain sudo[112845]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:16 np0005548790.localdomain sudo[112981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lehloyayfmypjgiikdxruodsjlyhgkkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013056.1147814-565-130950868153237/AnsiballZ_file.py
Dec 06 09:24:16 np0005548790.localdomain sudo[112981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:16 np0005548790.localdomain python3.9[112985]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:16 np0005548790.localdomain sudo[112981]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:16 np0005548790.localdomain sudo[112863]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:17 np0005548790.localdomain sudo[113092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqovafreyhwntafvfqdjskzmqpgrfote ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013056.7115817-565-102631065710509/AnsiballZ_file.py
Dec 06 09:24:17 np0005548790.localdomain sudo[113092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:17 np0005548790.localdomain sudo[113095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:24:17 np0005548790.localdomain sudo[113095]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:24:17 np0005548790.localdomain sudo[113095]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:17 np0005548790.localdomain python3.9[113094]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:17 np0005548790.localdomain sudo[113092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:17 np0005548790.localdomain sudo[113199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ohlxvssydcbrdgsvximdrhtwnugkqgjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013057.407703-565-18486958856677/AnsiballZ_file.py
Dec 06 09:24:17 np0005548790.localdomain sudo[113199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:17 np0005548790.localdomain python3.9[113201]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:17 np0005548790.localdomain sudo[113199]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:18 np0005548790.localdomain sudo[113291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhqkouvzkxhykcuehockxmnhsxsetyxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013057.998904-565-125158389839154/AnsiballZ_file.py
Dec 06 09:24:18 np0005548790.localdomain sudo[113291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60080 DF PROTO=TCP SPT=53052 DPT=9102 SEQ=400247612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAFE400000000001030307) 
Dec 06 09:24:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3072 DF PROTO=TCP SPT=55428 DPT=9882 SEQ=3621896653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EAFE480000000001030307) 
Dec 06 09:24:18 np0005548790.localdomain python3.9[113293]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:18 np0005548790.localdomain sudo[113291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:19 np0005548790.localdomain sudo[113383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hpaglcwlsftgmmzpacyqeuxjexadcenq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013058.9928079-1015-262625434960687/AnsiballZ_file.py
Dec 06 09:24:19 np0005548790.localdomain sudo[113383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:19 np0005548790.localdomain python3.9[113385]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:19 np0005548790.localdomain sudo[113383]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:19 np0005548790.localdomain sudo[113475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbfprhleuboovwqoblmqetpgawarqbui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013059.5529559-1015-188083822141426/AnsiballZ_file.py
Dec 06 09:24:19 np0005548790.localdomain sudo[113475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:19 np0005548790.localdomain python3.9[113477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:20 np0005548790.localdomain sudo[113475]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:20 np0005548790.localdomain sudo[113567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xyvtbawelhdjkfkggesdxbatcqfancun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013060.0957007-1015-85958439169893/AnsiballZ_file.py
Dec 06 09:24:20 np0005548790.localdomain sudo[113567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:20 np0005548790.localdomain python3.9[113569]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:20 np0005548790.localdomain sudo[113567]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:20 np0005548790.localdomain sudo[113659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixqbmerebkzsjglcssqmkfexboorlowy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013060.7003257-1015-181305473964232/AnsiballZ_file.py
Dec 06 09:24:20 np0005548790.localdomain sudo[113659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:21 np0005548790.localdomain python3.9[113661]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:21 np0005548790.localdomain sudo[113659]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3074 DF PROTO=TCP SPT=55428 DPT=9882 SEQ=3621896653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB0A600000000001030307) 
Dec 06 09:24:21 np0005548790.localdomain sudo[113751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-babgviocrkqszqsluekorapmonlvhbsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013061.278782-1015-127357365383235/AnsiballZ_file.py
Dec 06 09:24:21 np0005548790.localdomain sudo[113751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:21 np0005548790.localdomain python3.9[113753]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:21 np0005548790.localdomain sudo[113751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:22 np0005548790.localdomain sudo[113843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ldtgzcnlcqhpvxgnqcvmoozcyrqzdvdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013061.8513556-1015-200327562904996/AnsiballZ_file.py
Dec 06 09:24:22 np0005548790.localdomain sudo[113843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:22 np0005548790.localdomain python3.9[113845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:22 np0005548790.localdomain sudo[113843]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:22 np0005548790.localdomain sudo[113935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fozuwhlnrwxqumqhanqezhjfgzppwisi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013062.4581642-1015-184883878110784/AnsiballZ_file.py
Dec 06 09:24:22 np0005548790.localdomain sudo[113935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:23 np0005548790.localdomain python3.9[113937]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:23 np0005548790.localdomain sudo[113935]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:23 np0005548790.localdomain sudo[114027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxhuhjvrnenimdeuswmgqjawqaeikfdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013063.2539432-1015-22850280064649/AnsiballZ_file.py
Dec 06 09:24:23 np0005548790.localdomain sudo[114027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:23 np0005548790.localdomain python3.9[114029]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:23 np0005548790.localdomain sudo[114027]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:24 np0005548790.localdomain sudo[114119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pohcnftrbthmzaoonntbucincrznejiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013063.8275669-1015-3585023128160/AnsiballZ_file.py
Dec 06 09:24:24 np0005548790.localdomain sudo[114119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:24 np0005548790.localdomain python3.9[114121]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:24 np0005548790.localdomain sudo[114119]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11553 DF PROTO=TCP SPT=51914 DPT=9105 SEQ=3292879858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB169F0000000001030307) 
Dec 06 09:24:24 np0005548790.localdomain sudo[114211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffkdnwrihognnvwmnbfmilzcmofwydix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013064.400461-1015-106594454925671/AnsiballZ_file.py
Dec 06 09:24:24 np0005548790.localdomain sudo[114211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:24 np0005548790.localdomain python3.9[114213]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:24 np0005548790.localdomain sudo[114211]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:25 np0005548790.localdomain sudo[114303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyludybmtmnzharshfiyaotwbktwnsty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013064.97799-1015-99632846842396/AnsiballZ_file.py
Dec 06 09:24:25 np0005548790.localdomain sudo[114303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:25 np0005548790.localdomain python3.9[114305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:25 np0005548790.localdomain sudo[114303]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:25 np0005548790.localdomain sudo[114395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubimofnwgmcrcechsosbxfhibsydbzbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013065.542374-1015-42180610304148/AnsiballZ_file.py
Dec 06 09:24:25 np0005548790.localdomain sudo[114395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:25 np0005548790.localdomain python3.9[114397]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:25 np0005548790.localdomain sudo[114395]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:26 np0005548790.localdomain sudo[114487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efvomqwrqfejdnutlooauaqpckttdqwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013066.090045-1015-4739636794628/AnsiballZ_file.py
Dec 06 09:24:26 np0005548790.localdomain sudo[114487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:26 np0005548790.localdomain python3.9[114489]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:26 np0005548790.localdomain sudo[114487]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:26 np0005548790.localdomain sudo[114579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwortntymarvtdolvztfdchyvoekjfok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013066.5807838-1015-186032341227714/AnsiballZ_file.py
Dec 06 09:24:26 np0005548790.localdomain sudo[114579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:27 np0005548790.localdomain python3.9[114581]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:27 np0005548790.localdomain sudo[114579]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:27 np0005548790.localdomain sudo[114671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcsbkiswfiduakxgwijnjeqtrdwstvhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013067.16533-1015-146410305253753/AnsiballZ_file.py
Dec 06 09:24:27 np0005548790.localdomain sudo[114671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:27 np0005548790.localdomain python3.9[114673]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:27 np0005548790.localdomain sudo[114671]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7354 DF PROTO=TCP SPT=49458 DPT=9105 SEQ=4150436835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB23200000000001030307) 
Dec 06 09:24:28 np0005548790.localdomain sudo[114763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svsiitdezlkoyxvpodgidzesndeknmxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013067.773991-1015-173238167242601/AnsiballZ_file.py
Dec 06 09:24:28 np0005548790.localdomain sudo[114763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:28 np0005548790.localdomain python3.9[114765]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:28 np0005548790.localdomain sudo[114763]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:28 np0005548790.localdomain sudo[114855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-attocucebvsnkepripslmsqsswnfuyxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013068.3995502-1015-272478836769739/AnsiballZ_file.py
Dec 06 09:24:28 np0005548790.localdomain sudo[114855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:28 np0005548790.localdomain python3.9[114857]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:28 np0005548790.localdomain sudo[114855]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:29 np0005548790.localdomain sudo[114947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yusaoboqyyrncvxfdrlsvlsjqxyanfne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013069.7288392-1015-152036642537213/AnsiballZ_file.py
Dec 06 09:24:29 np0005548790.localdomain sudo[114947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:30 np0005548790.localdomain python3.9[114949]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:30 np0005548790.localdomain sudo[114947]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:30 np0005548790.localdomain sudo[115039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sinihegifwwllzxxfjzfjmljpnhhssgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013070.299921-1015-103993786070331/AnsiballZ_file.py
Dec 06 09:24:30 np0005548790.localdomain sudo[115039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11555 DF PROTO=TCP SPT=51914 DPT=9105 SEQ=3292879858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB2E600000000001030307) 
Dec 06 09:24:30 np0005548790.localdomain python3.9[115041]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:30 np0005548790.localdomain sudo[115039]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:31 np0005548790.localdomain sudo[115131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oglmpxsfgiwzzssdboqwzhmtmrgbohgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013071.3890672-1015-190877796027398/AnsiballZ_file.py
Dec 06 09:24:31 np0005548790.localdomain sudo[115131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:31 np0005548790.localdomain python3.9[115133]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:31 np0005548790.localdomain sudo[115131]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:32 np0005548790.localdomain sudo[115223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpedvhcrxifcpouvncxwbqjgvdfwwyfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013072.105295-1015-186709984797987/AnsiballZ_file.py
Dec 06 09:24:32 np0005548790.localdomain sudo[115223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:32 np0005548790.localdomain python3.9[115225]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:24:32 np0005548790.localdomain sudo[115223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:33 np0005548790.localdomain sudo[115315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tiaeyxnomkkekgwvldbjsdjjxqvjjfoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013072.8897035-1462-89807372030683/AnsiballZ_command.py
Dec 06 09:24:33 np0005548790.localdomain sudo[115315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:33 np0005548790.localdomain python3.9[115317]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:33 np0005548790.localdomain sudo[115315]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3076 DF PROTO=TCP SPT=55428 DPT=9882 SEQ=3621896653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB3B1F0000000001030307) 
Dec 06 09:24:34 np0005548790.localdomain python3.9[115409]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:24:34 np0005548790.localdomain sudo[115499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmmbhxsbuawetppyhauzirhiifniehrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013074.4980352-1517-157486990710984/AnsiballZ_systemd_service.py
Dec 06 09:24:34 np0005548790.localdomain sudo[115499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:35 np0005548790.localdomain python3.9[115501]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:24:35 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:24:35 np0005548790.localdomain systemd-sysv-generator[115530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:24:35 np0005548790.localdomain systemd-rc-local-generator[115523]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:24:35 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:24:35 np0005548790.localdomain sudo[115499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35735 DF PROTO=TCP SPT=41398 DPT=9101 SEQ=1840071004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB451F0000000001030307) 
Dec 06 09:24:37 np0005548790.localdomain sudo[115627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-saytgeweswsmnzqshvyncxfevzktryph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013077.2542334-1541-74114830479710/AnsiballZ_command.py
Dec 06 09:24:37 np0005548790.localdomain sudo[115627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:37 np0005548790.localdomain python3.9[115629]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:37 np0005548790.localdomain sudo[115627]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:38 np0005548790.localdomain sudo[115720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdoizqsqitvpridalrpykuamgebebumx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013077.7971218-1541-226986924921655/AnsiballZ_command.py
Dec 06 09:24:38 np0005548790.localdomain sudo[115720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:38 np0005548790.localdomain python3.9[115722]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:38 np0005548790.localdomain sudo[115720]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:38 np0005548790.localdomain sudo[115813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odzbihfwjpaamxkympembizedckpgudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013078.398675-1541-152956899888874/AnsiballZ_command.py
Dec 06 09:24:38 np0005548790.localdomain sudo[115813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:38 np0005548790.localdomain python3.9[115815]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:38 np0005548790.localdomain sudo[115813]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:39 np0005548790.localdomain sudo[115906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clzashspfddttkwfrmjxgsozohzsvzrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013078.9748447-1541-247767342262965/AnsiballZ_command.py
Dec 06 09:24:39 np0005548790.localdomain sudo[115906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:39 np0005548790.localdomain python3.9[115908]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:39 np0005548790.localdomain sudo[115906]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:39 np0005548790.localdomain sudo[115999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzvpxvgvtvgspnqsievxqkpxuoddkzek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013079.4989462-1541-220726521414630/AnsiballZ_command.py
Dec 06 09:24:39 np0005548790.localdomain sudo[115999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:40 np0005548790.localdomain python3.9[116001]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:40 np0005548790.localdomain sudo[115999]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=676 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=1012669459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB531F0000000001030307) 
Dec 06 09:24:40 np0005548790.localdomain sudo[116092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vrctkzxctmoimdukcyfdnvdbhatptkur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013080.1311934-1541-60365295219590/AnsiballZ_command.py
Dec 06 09:24:40 np0005548790.localdomain sudo[116092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:40 np0005548790.localdomain python3.9[116094]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:40 np0005548790.localdomain sudo[116092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:41 np0005548790.localdomain sudo[116185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kezxempkpmginctuhjetcmqoxlrtvuvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013080.8222563-1541-185553324897635/AnsiballZ_command.py
Dec 06 09:24:41 np0005548790.localdomain sudo[116185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:41 np0005548790.localdomain python3.9[116187]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:41 np0005548790.localdomain sudo[116185]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:41 np0005548790.localdomain sudo[116278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnxbggkqooouhqdjmgcapvbmpcgoplcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013081.397161-1541-5685210743010/AnsiballZ_command.py
Dec 06 09:24:41 np0005548790.localdomain sudo[116278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:41 np0005548790.localdomain python3.9[116280]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:41 np0005548790.localdomain sudo[116278]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:42 np0005548790.localdomain sudo[116371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oucmngklqkemvuwomcvufhfuoczygvwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013081.9483743-1541-247640762182216/AnsiballZ_command.py
Dec 06 09:24:42 np0005548790.localdomain sudo[116371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:42 np0005548790.localdomain python3.9[116373]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:42 np0005548790.localdomain sudo[116371]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:42 np0005548790.localdomain sudo[116464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfjfekmdtzbgccuzuvandlhrzkpxyurn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013082.5070553-1541-156107356368971/AnsiballZ_command.py
Dec 06 09:24:42 np0005548790.localdomain sudo[116464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:42 np0005548790.localdomain python3.9[116466]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:42 np0005548790.localdomain sudo[116464]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29178 DF PROTO=TCP SPT=49878 DPT=9100 SEQ=1861935453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB5FA00000000001030307) 
Dec 06 09:24:43 np0005548790.localdomain sudo[116557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uyptftcqongtejwvnmtfzkuwtxopfqzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013083.1162379-1541-4507144278817/AnsiballZ_command.py
Dec 06 09:24:43 np0005548790.localdomain sudo[116557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:43 np0005548790.localdomain python3.9[116559]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:43 np0005548790.localdomain sudo[116557]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:43 np0005548790.localdomain sudo[116650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iusepihobnoxnsycxqvvmgdxkktirbvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013083.670611-1541-193616034932352/AnsiballZ_command.py
Dec 06 09:24:43 np0005548790.localdomain sudo[116650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:44 np0005548790.localdomain python3.9[116652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:44 np0005548790.localdomain sudo[116650]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:44 np0005548790.localdomain sudo[116743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbdrhbcszeodvlvqmjqfubdpbxxwdjuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013084.2647567-1541-24326518519792/AnsiballZ_command.py
Dec 06 09:24:44 np0005548790.localdomain sudo[116743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:44 np0005548790.localdomain python3.9[116745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:44 np0005548790.localdomain sudo[116743]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:45 np0005548790.localdomain sudo[116836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsbsilyykfhxukanakzrdvjokeslhoxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013084.8517988-1541-246322538252647/AnsiballZ_command.py
Dec 06 09:24:45 np0005548790.localdomain sudo[116836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:45 np0005548790.localdomain python3.9[116838]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:45 np0005548790.localdomain sudo[116836]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:45 np0005548790.localdomain sudo[116929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-umhspprwuzkeukseohnwxzjndbcfkxqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013085.3787863-1541-188511036757534/AnsiballZ_command.py
Dec 06 09:24:45 np0005548790.localdomain sudo[116929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:45 np0005548790.localdomain python3.9[116931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:45 np0005548790.localdomain sudo[116929]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:46 np0005548790.localdomain sudo[117022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftghixihlypcpbghwqnwpkdvfwyygvts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013085.9790385-1541-30811435135673/AnsiballZ_command.py
Dec 06 09:24:46 np0005548790.localdomain sudo[117022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:46 np0005548790.localdomain python3.9[117024]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:46 np0005548790.localdomain sudo[117022]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:46 np0005548790.localdomain sudo[117115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jduekqersjgswgfpkjknysbaixhrkxdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013086.5495813-1541-66963268443535/AnsiballZ_command.py
Dec 06 09:24:46 np0005548790.localdomain sudo[117115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:47 np0005548790.localdomain python3.9[117117]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:47 np0005548790.localdomain sudo[117115]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:47 np0005548790.localdomain sudo[117208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xuyatkbsjqrdijhkongzvdrnewamodap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013087.2921238-1541-51125680583325/AnsiballZ_command.py
Dec 06 09:24:47 np0005548790.localdomain sudo[117208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:47 np0005548790.localdomain python3.9[117210]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:47 np0005548790.localdomain sudo[117208]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:48 np0005548790.localdomain sudo[117301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxdllsntnmgjzyyxolnavhlwofeociqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013087.806066-1541-161104453933166/AnsiballZ_command.py
Dec 06 09:24:48 np0005548790.localdomain sudo[117301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:48 np0005548790.localdomain python3.9[117303]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:48 np0005548790.localdomain sudo[117301]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4811 DF PROTO=TCP SPT=36134 DPT=9102 SEQ=295637050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB73730000000001030307) 
Dec 06 09:24:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42858 DF PROTO=TCP SPT=50254 DPT=9882 SEQ=2558370070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB73790000000001030307) 
Dec 06 09:24:48 np0005548790.localdomain sudo[117394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhzuyvruvwkfvsbxhezqresqocinkeyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013088.4342105-1541-165382866567113/AnsiballZ_command.py
Dec 06 09:24:48 np0005548790.localdomain sudo[117394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:48 np0005548790.localdomain python3.9[117396]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:48 np0005548790.localdomain sudo[117394]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:49 np0005548790.localdomain sudo[117487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufhmchvsdtuzuvguzkxpkrfdtppcqcjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013088.9970977-1541-41422500560187/AnsiballZ_command.py
Dec 06 09:24:49 np0005548790.localdomain sudo[117487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:24:49 np0005548790.localdomain python3.9[117489]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:24:49 np0005548790.localdomain sudo[117487]: pam_unix(sudo:session): session closed for user root
Dec 06 09:24:50 np0005548790.localdomain sshd[111126]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:24:50 np0005548790.localdomain systemd-logind[760]: Session 37 logged out. Waiting for processes to exit.
Dec 06 09:24:50 np0005548790.localdomain systemd[1]: session-37.scope: Deactivated successfully.
Dec 06 09:24:50 np0005548790.localdomain systemd[1]: session-37.scope: Consumed 30.035s CPU time.
Dec 06 09:24:50 np0005548790.localdomain systemd-logind[760]: Removed session 37.
Dec 06 09:24:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4813 DF PROTO=TCP SPT=36134 DPT=9102 SEQ=295637050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB7F600000000001030307) 
Dec 06 09:24:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33951 DF PROTO=TCP SPT=53688 DPT=9105 SEQ=1359872606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB8BDF0000000001030307) 
Dec 06 09:24:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=241 DF PROTO=TCP SPT=55322 DPT=9105 SEQ=3011661123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EB97200000000001030307) 
Dec 06 09:25:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33953 DF PROTO=TCP SPT=53688 DPT=9105 SEQ=1359872606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBA39F0000000001030307) 
Dec 06 09:25:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42862 DF PROTO=TCP SPT=50254 DPT=9882 SEQ=2558370070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBAF1F0000000001030307) 
Dec 06 09:25:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33279 DF PROTO=TCP SPT=44984 DPT=9101 SEQ=3572816017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBBB200000000001030307) 
Dec 06 09:25:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52093 DF PROTO=TCP SPT=53002 DPT=9100 SEQ=1962059556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBC91F0000000001030307) 
Dec 06 09:25:11 np0005548790.localdomain sshd[117506]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:25:11 np0005548790.localdomain sshd[117506]: Accepted publickey for zuul from 192.168.122.30 port 48340 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:25:11 np0005548790.localdomain systemd-logind[760]: New session 38 of user zuul.
Dec 06 09:25:11 np0005548790.localdomain systemd[1]: Started Session 38 of User zuul.
Dec 06 09:25:11 np0005548790.localdomain sshd[117506]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:25:12 np0005548790.localdomain python3.9[117599]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 06 09:25:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33674 DF PROTO=TCP SPT=42580 DPT=9100 SEQ=1094144546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBD49F0000000001030307) 
Dec 06 09:25:13 np0005548790.localdomain python3.9[117703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:25:14 np0005548790.localdomain sudo[117793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmgteghezkvwnhxljdefgczrvrjybnoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013113.9757445-95-73319904664842/AnsiballZ_command.py
Dec 06 09:25:14 np0005548790.localdomain sudo[117793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:14 np0005548790.localdomain python3.9[117795]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:25:14 np0005548790.localdomain sudo[117793]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:15 np0005548790.localdomain sudo[117886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vysxgupxjdevtecofymhozxewempdyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013114.8964105-131-242207036257295/AnsiballZ_stat.py
Dec 06 09:25:15 np0005548790.localdomain sudo[117886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:15 np0005548790.localdomain python3.9[117888]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:25:15 np0005548790.localdomain sudo[117886]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:16 np0005548790.localdomain sudo[117978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqszgsjfzjdptifakrqjbfwngqmpyjce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013115.6633584-155-35760584163114/AnsiballZ_file.py
Dec 06 09:25:16 np0005548790.localdomain sudo[117978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:16 np0005548790.localdomain python3.9[117980]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:25:16 np0005548790.localdomain sudo[117978]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:16 np0005548790.localdomain sudo[118070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhrolwbwsozjvstoaitpwkmuycjlmihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013116.4447458-179-207046901651808/AnsiballZ_stat.py
Dec 06 09:25:16 np0005548790.localdomain sudo[118070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:17 np0005548790.localdomain python3.9[118072]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:25:17 np0005548790.localdomain sudo[118070]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:17 np0005548790.localdomain sudo[118100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:25:17 np0005548790.localdomain sudo[118100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:25:17 np0005548790.localdomain sudo[118100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:17 np0005548790.localdomain sudo[118115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:25:17 np0005548790.localdomain sudo[118115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:25:17 np0005548790.localdomain sudo[118173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhklifjfebhkjiwiezmnwijosnxmcwnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013116.4447458-179-207046901651808/AnsiballZ_copy.py
Dec 06 09:25:17 np0005548790.localdomain sudo[118173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:17 np0005548790.localdomain python3.9[118175]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013116.4447458-179-207046901651808/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:25:17 np0005548790.localdomain sudo[118173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:18 np0005548790.localdomain sudo[118115]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:18 np0005548790.localdomain sudo[118296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovxkopjttkaazinsusivtarrbksiswhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013118.0220692-224-10601442920010/AnsiballZ_setup.py
Dec 06 09:25:18 np0005548790.localdomain sudo[118296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52298 DF PROTO=TCP SPT=50412 DPT=9102 SEQ=2935375371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBE8A10000000001030307) 
Dec 06 09:25:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48370 DF PROTO=TCP SPT=43718 DPT=9882 SEQ=2917683076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBE8A80000000001030307) 
Dec 06 09:25:18 np0005548790.localdomain python3.9[118298]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:25:18 np0005548790.localdomain sudo[118296]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:19 np0005548790.localdomain sudo[118317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:25:19 np0005548790.localdomain sudo[118317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:25:19 np0005548790.localdomain sudo[118317]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:19 np0005548790.localdomain sudo[118407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvkhqmofoxyahtsgzbycvirmjtzyyszy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013119.0760167-249-132533025508958/AnsiballZ_file.py
Dec 06 09:25:19 np0005548790.localdomain sudo[118407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:19 np0005548790.localdomain python3.9[118409]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:25:19 np0005548790.localdomain sudo[118407]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:19 np0005548790.localdomain sudo[118499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwtebqdtswykkpsxugfqrejjstrunorq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013119.7319148-275-63497878461426/AnsiballZ_file.py
Dec 06 09:25:19 np0005548790.localdomain sudo[118499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:20 np0005548790.localdomain python3.9[118501]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:25:20 np0005548790.localdomain sudo[118499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:25:20 np0005548790.localdomain python3.9[118591]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:25:21 np0005548790.localdomain network[118608]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:25:21 np0005548790.localdomain network[118609]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:25:21 np0005548790.localdomain network[118610]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:25:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52300 DF PROTO=TCP SPT=50412 DPT=9102 SEQ=2935375371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EBF49F0000000001030307) 
Dec 06 09:25:22 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:25:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26192 DF PROTO=TCP SPT=41986 DPT=9105 SEQ=1989622481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC011F0000000001030307) 
Dec 06 09:25:25 np0005548790.localdomain python3.9[118808]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:25:25 np0005548790.localdomain python3.9[118898]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:25:26 np0005548790.localdomain sudo[118992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qznwutsfjfgushrsdmeqoneosgohidlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013126.4111767-379-65848202400544/AnsiballZ_command.py
Dec 06 09:25:26 np0005548790.localdomain sudo[118992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:25:26 np0005548790.localdomain python3.9[118994]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
                                                            set -euxo pipefail
                                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                                            python3 -m venv ./venv
                                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
                                                            # This is required for FIPS enabled until trunk.rdoproject.org
                                                            # is not being served from a centos7 host, tracked by
                                                            # https://issues.redhat.com/browse/RHOSZUUL-1517
                                                            dnf -y install crypto-policies
                                                            update-crypto-policies --set FIPS:NO-ENFORCE-EMS
                                                            ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
                                                            
                                                            # Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible
                                                            # with rhel 9.2 openssh
                                                            dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save
                                                            # FIXME: perform dnf upgrade for other packages in EDPM ansible
                                                            # here we only ensuring that decontainerized libvirt can start
                                                            dnf -y upgrade openstack-selinux
                                                            rm -f /run/virtlogd.pid
                                                            
                                                            rm -rf repo-setup-main
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:25:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11558 DF PROTO=TCP SPT=51914 DPT=9105 SEQ=3292879858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC0D1F0000000001030307) 
Dec 06 09:25:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26194 DF PROTO=TCP SPT=41986 DPT=9105 SEQ=1989622481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC18E00000000001030307) 
Dec 06 09:25:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48374 DF PROTO=TCP SPT=43718 DPT=9882 SEQ=2917683076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC251F0000000001030307) 
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 06 09:25:36 np0005548790.localdomain sshd[45480]: Received signal 15; terminating.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: sshd.service: Consumed 5.564s CPU time.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 06 09:25:36 np0005548790.localdomain sshd[119037]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:25:36 np0005548790.localdomain sshd[119037]: Server listening on 0.0.0.0 port 22.
Dec 06 09:25:36 np0005548790.localdomain sshd[119037]: Server listening on :: port 22.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:25:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47632 DF PROTO=TCP SPT=57926 DPT=9101 SEQ=1707318775 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC2F200000000001030307) 
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: run-rab3868829eed414b81f2f042993424b4.service: Deactivated successfully.
Dec 06 09:25:36 np0005548790.localdomain systemd[1]: run-r9abdaad3d64e4b6291e6129d0ac7105e.service: Deactivated successfully.
Dec 06 09:25:37 np0005548790.localdomain sshd[119037]: Received signal 15; terminating.
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 06 09:25:37 np0005548790.localdomain sshd[119211]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:25:37 np0005548790.localdomain sshd[119211]: Server listening on 0.0.0.0 port 22.
Dec 06 09:25:37 np0005548790.localdomain sshd[119211]: Server listening on :: port 22.
Dec 06 09:25:37 np0005548790.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 06 09:25:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29181 DF PROTO=TCP SPT=49878 DPT=9100 SEQ=1861935453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC3D1F0000000001030307) 
Dec 06 09:25:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43778 DF PROTO=TCP SPT=51744 DPT=9100 SEQ=2565823201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC49E00000000001030307) 
Dec 06 09:25:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46004 DF PROTO=TCP SPT=56474 DPT=9102 SEQ=3723723477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC5DD10000000001030307) 
Dec 06 09:25:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54597 DF PROTO=TCP SPT=47752 DPT=9882 SEQ=4276183828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC5DD80000000001030307) 
Dec 06 09:25:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46006 DF PROTO=TCP SPT=56474 DPT=9102 SEQ=3723723477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC69DF0000000001030307) 
Dec 06 09:25:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37937 DF PROTO=TCP SPT=49600 DPT=9105 SEQ=2129346735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC76200000000001030307) 
Dec 06 09:25:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33956 DF PROTO=TCP SPT=53688 DPT=9105 SEQ=1359872606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC811F0000000001030307) 
Dec 06 09:26:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37939 DF PROTO=TCP SPT=49600 DPT=9105 SEQ=2129346735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC8DDF0000000001030307) 
Dec 06 09:26:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46008 DF PROTO=TCP SPT=56474 DPT=9102 SEQ=3723723477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EC991F0000000001030307) 
Dec 06 09:26:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28697 DF PROTO=TCP SPT=55554 DPT=9101 SEQ=167899838 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECA5200000000001030307) 
Dec 06 09:26:09 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50786 DF PROTO=TCP SPT=52016 DPT=9100 SEQ=567355475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECB15F0000000001030307) 
Dec 06 09:26:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50787 DF PROTO=TCP SPT=52016 DPT=9100 SEQ=567355475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECC11F0000000001030307) 
Dec 06 09:26:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23677 DF PROTO=TCP SPT=34420 DPT=9102 SEQ=337597665 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECD3010000000001030307) 
Dec 06 09:26:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30196 DF PROTO=TCP SPT=49680 DPT=9882 SEQ=2977846913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECD3070000000001030307) 
Dec 06 09:26:19 np0005548790.localdomain sudo[119349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:26:19 np0005548790.localdomain sudo[119349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:26:19 np0005548790.localdomain sudo[119349]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:19 np0005548790.localdomain sudo[119364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:26:19 np0005548790.localdomain sudo[119364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:26:19 np0005548790.localdomain sudo[119364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:20 np0005548790.localdomain sudo[119413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:26:20 np0005548790.localdomain sudo[119413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:26:20 np0005548790.localdomain sudo[119413]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23679 DF PROTO=TCP SPT=34420 DPT=9102 SEQ=337597665 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECDF1F0000000001030307) 
Dec 06 09:26:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35421 DF PROTO=TCP SPT=45010 DPT=9105 SEQ=499596993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECEB670000000001030307) 
Dec 06 09:26:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26197 DF PROTO=TCP SPT=41986 DPT=9105 SEQ=1989622481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ECF71F0000000001030307) 
Dec 06 09:26:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35423 DF PROTO=TCP SPT=45010 DPT=9105 SEQ=499596993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED031F0000000001030307) 
Dec 06 09:26:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30200 DF PROTO=TCP SPT=49680 DPT=9882 SEQ=2977846913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED0F1F0000000001030307) 
Dec 06 09:26:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43468 DF PROTO=TCP SPT=44426 DPT=9100 SEQ=2308500621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED1C5F0000000001030307) 
Dec 06 09:26:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43781 DF PROTO=TCP SPT=51744 DPT=9100 SEQ=2565823201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED291F0000000001030307) 
Dec 06 09:26:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43470 DF PROTO=TCP SPT=44426 DPT=9100 SEQ=2308500621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED34200000000001030307) 
Dec 06 09:26:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53242 DF PROTO=TCP SPT=33566 DPT=9102 SEQ=1588958586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED48310000000001030307) 
Dec 06 09:26:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55766 DF PROTO=TCP SPT=36414 DPT=9882 SEQ=3530715256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED48380000000001030307) 
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  Converting 2741 SID table entries...
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:26:48 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:26:49 np0005548790.localdomain sudo[118992]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53244 DF PROTO=TCP SPT=33566 DPT=9102 SEQ=1588958586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED54200000000001030307) 
Dec 06 09:26:53 np0005548790.localdomain sudo[119856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osbujzjsgknjplftngvnplhrajftuxfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013213.0197628-405-152315618521596/AnsiballZ_file.py
Dec 06 09:26:53 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=17 res=1
Dec 06 09:26:53 np0005548790.localdomain sudo[119856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:26:53 np0005548790.localdomain python3.9[119858]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:26:53 np0005548790.localdomain sudo[119856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:53 np0005548790.localdomain sudo[119948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhmuaejmdybmdmpdtzhqxsvdrcufnfhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013213.7038164-428-18101218001391/AnsiballZ_stat.py
Dec 06 09:26:53 np0005548790.localdomain sudo[119948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:26:54 np0005548790.localdomain python3.9[119950]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:26:54 np0005548790.localdomain sudo[119948]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:54 np0005548790.localdomain sudo[120021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-byjogegvnimfinmkapznghlzhfhkyanb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013213.7038164-428-18101218001391/AnsiballZ_copy.py
Dec 06 09:26:54 np0005548790.localdomain sudo[120021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:26:54 np0005548790.localdomain python3.9[120023]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013213.7038164-428-18101218001391/.source.fact _original_basename=.azos6qqg follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:26:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21417 DF PROTO=TCP SPT=39224 DPT=9105 SEQ=423463916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED609F0000000001030307) 
Dec 06 09:26:54 np0005548790.localdomain sudo[120021]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:55 np0005548790.localdomain python3.9[120113]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:26:56 np0005548790.localdomain sudo[120209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ridzrbbrsznlfxyojvplyekqmmsmsydc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013216.2171125-503-75211375303419/AnsiballZ_setup.py
Dec 06 09:26:56 np0005548790.localdomain sudo[120209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:26:56 np0005548790.localdomain python3.9[120211]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:26:57 np0005548790.localdomain sudo[120209]: pam_unix(sudo:session): session closed for user root
Dec 06 09:26:57 np0005548790.localdomain sudo[120263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekebzppsqrazzqjjrphjeifppvygirob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013216.2171125-503-75211375303419/AnsiballZ_dnf.py
Dec 06 09:26:57 np0005548790.localdomain sudo[120263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:26:57 np0005548790.localdomain python3.9[120265]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:26:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37942 DF PROTO=TCP SPT=49600 DPT=9105 SEQ=2129346735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED6D1F0000000001030307) 
Dec 06 09:27:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21419 DF PROTO=TCP SPT=39224 DPT=9105 SEQ=423463916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED78600000000001030307) 
Dec 06 09:27:01 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:27:01 np0005548790.localdomain systemd-rc-local-generator[120299]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:27:01 np0005548790.localdomain systemd-sysv-generator[120304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:27:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:27:01 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:27:02 np0005548790.localdomain sudo[120263]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:03 np0005548790.localdomain sudo[120403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvdkdztouzizwccbwtdcjpzbdtmruijj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013223.1823068-540-144888008421681/AnsiballZ_command.py
Dec 06 09:27:03 np0005548790.localdomain sudo[120403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:03 np0005548790.localdomain python3.9[120405]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:27:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53246 DF PROTO=TCP SPT=33566 DPT=9102 SEQ=1588958586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED851F0000000001030307) 
Dec 06 09:27:04 np0005548790.localdomain sudo[120403]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:05 np0005548790.localdomain sudo[120642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivnbktrqwhenugrrmnotwxoxthrscpoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013225.3154242-564-184730300412163/AnsiballZ_selinux.py
Dec 06 09:27:05 np0005548790.localdomain sudo[120642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:06 np0005548790.localdomain python3.9[120644]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 06 09:27:06 np0005548790.localdomain sudo[120642]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44209 DF PROTO=TCP SPT=58168 DPT=9101 SEQ=685638435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED8F1F0000000001030307) 
Dec 06 09:27:06 np0005548790.localdomain sudo[120734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyoajkblldmfhxxspnkrnbrvcwayfgts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013226.594324-596-250315995604579/AnsiballZ_command.py
Dec 06 09:27:06 np0005548790.localdomain sudo[120734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:07 np0005548790.localdomain python3.9[120736]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 06 09:27:07 np0005548790.localdomain sudo[120734]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:07 np0005548790.localdomain sudo[120827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjhmeifccqdthyzqvwlkpkfzgrqvbjgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013227.72165-620-130969471109994/AnsiballZ_file.py
Dec 06 09:27:07 np0005548790.localdomain sudo[120827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:08 np0005548790.localdomain python3.9[120829]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:27:08 np0005548790.localdomain sudo[120827]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:08 np0005548790.localdomain sudo[120919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpzyklcvvysydwjfnqvmepysrsaojuwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013228.410702-644-143578835449764/AnsiballZ_mount.py
Dec 06 09:27:08 np0005548790.localdomain sudo[120919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:09 np0005548790.localdomain python3.9[120921]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 06 09:27:09 np0005548790.localdomain sudo[120919]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:10 np0005548790.localdomain sudo[121011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whsvpejvmimhplsijfbiwedggxllufzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013229.9876287-729-118349038641106/AnsiballZ_file.py
Dec 06 09:27:10 np0005548790.localdomain sudo[121011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:10 np0005548790.localdomain python3.9[121013]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:27:10 np0005548790.localdomain sudo[121011]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50790 DF PROTO=TCP SPT=52016 DPT=9100 SEQ=567355475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17ED9F1F0000000001030307) 
Dec 06 09:27:10 np0005548790.localdomain sudo[121103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tncngvwtdndwqdjxqootehxooancwpeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013230.6751208-753-198478927536905/AnsiballZ_stat.py
Dec 06 09:27:10 np0005548790.localdomain sudo[121103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:11 np0005548790.localdomain python3.9[121105]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:27:11 np0005548790.localdomain sudo[121103]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:11 np0005548790.localdomain sudo[121176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwlrtdinaiqwisgcdspifebeifciazxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013230.6751208-753-198478927536905/AnsiballZ_copy.py
Dec 06 09:27:11 np0005548790.localdomain sudo[121176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:11 np0005548790.localdomain python3.9[121178]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013230.6751208-753-198478927536905/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:27:11 np0005548790.localdomain sudo[121176]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:12 np0005548790.localdomain sudo[121268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhrabacowgdbvnxckvfcsqcpenplffas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013232.4367733-825-235665952890014/AnsiballZ_stat.py
Dec 06 09:27:12 np0005548790.localdomain sudo[121268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:12 np0005548790.localdomain python3.9[121270]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:27:12 np0005548790.localdomain sudo[121268]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19215 DF PROTO=TCP SPT=48084 DPT=9100 SEQ=306944250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDA9600000000001030307) 
Dec 06 09:27:14 np0005548790.localdomain sudo[121362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsrantujdospmyknefpdxqoafsnrerfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013233.592436-864-192155802707039/AnsiballZ_getent.py
Dec 06 09:27:14 np0005548790.localdomain sudo[121362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:14 np0005548790.localdomain python3.9[121364]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 06 09:27:14 np0005548790.localdomain sudo[121362]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:15 np0005548790.localdomain sudo[121455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgnciqkqsvyjaudibsmfrldatbnpfcnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013234.987101-893-226911092437271/AnsiballZ_getent.py
Dec 06 09:27:15 np0005548790.localdomain sudo[121455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:15 np0005548790.localdomain python3.9[121457]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 06 09:27:15 np0005548790.localdomain sudo[121455]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:16 np0005548790.localdomain sudo[121548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-unkveigqhzxzfyrgycluphphtkfglqjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013236.0418038-917-143321641427915/AnsiballZ_group.py
Dec 06 09:27:16 np0005548790.localdomain sudo[121548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:16 np0005548790.localdomain python3.9[121550]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:27:16 np0005548790.localdomain groupmod[121551]: group changed in /etc/group (group hugetlbfs/985, new gid: 42477)
Dec 06 09:27:16 np0005548790.localdomain groupmod[121551]: group changed in /etc/passwd (group hugetlbfs/985, new gid: 42477)
Dec 06 09:27:16 np0005548790.localdomain sudo[121548]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:17 np0005548790.localdomain sudo[121646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swnljqngtrbsffxgqcqolkxqovknldtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013237.0059714-944-252454220373549/AnsiballZ_file.py
Dec 06 09:27:17 np0005548790.localdomain sudo[121646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:17 np0005548790.localdomain python3.9[121648]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 06 09:27:17 np0005548790.localdomain sudo[121646]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:18 np0005548790.localdomain sudo[121738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-caquxhwwxnupjzhcdtitturthgnbfiia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013237.9009328-978-189930202573235/AnsiballZ_dnf.py
Dec 06 09:27:18 np0005548790.localdomain sudo[121738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8600 DF PROTO=TCP SPT=51718 DPT=9102 SEQ=2460504535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDBD610000000001030307) 
Dec 06 09:27:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18681 DF PROTO=TCP SPT=37768 DPT=9882 SEQ=1154190166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDBD6A0000000001030307) 
Dec 06 09:27:18 np0005548790.localdomain python3.9[121740]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:27:20 np0005548790.localdomain sudo[121743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:27:20 np0005548790.localdomain sudo[121743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:27:20 np0005548790.localdomain sudo[121743]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:20 np0005548790.localdomain sudo[121758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:27:20 np0005548790.localdomain sudo[121758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:27:21 np0005548790.localdomain sudo[121758]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19216 DF PROTO=TCP SPT=48084 DPT=9100 SEQ=306944250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDC91F0000000001030307) 
Dec 06 09:27:21 np0005548790.localdomain sudo[121738]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:21 np0005548790.localdomain sudo[121892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwxnybfarvgqydretjbbklkbzuozuqqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013241.6555495-1002-41991649864054/AnsiballZ_file.py
Dec 06 09:27:21 np0005548790.localdomain sudo[121892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:22 np0005548790.localdomain sudo[121895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:27:22 np0005548790.localdomain sudo[121895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:27:22 np0005548790.localdomain sudo[121895]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:22 np0005548790.localdomain python3.9[121894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:27:22 np0005548790.localdomain sudo[121892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:22 np0005548790.localdomain sudo[121999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efhinxyciunuoohdiykoflzyfyllgbgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013242.303134-1025-37698768408431/AnsiballZ_stat.py
Dec 06 09:27:22 np0005548790.localdomain sudo[121999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64607 DF PROTO=TCP SPT=33488 DPT=9105 SEQ=2527046682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDD5DF0000000001030307) 
Dec 06 09:27:26 np0005548790.localdomain python3.9[122001]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:27:26 np0005548790.localdomain sudo[121999]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35426 DF PROTO=TCP SPT=45010 DPT=9105 SEQ=499596993 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDE11F0000000001030307) 
Dec 06 09:27:27 np0005548790.localdomain sudo[122073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guthdvdlnkhprvugkneoxgahiulthmuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013242.303134-1025-37698768408431/AnsiballZ_copy.py
Dec 06 09:27:27 np0005548790.localdomain sudo[122073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:27 np0005548790.localdomain python3.9[122075]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013242.303134-1025-37698768408431/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:27:27 np0005548790.localdomain sudo[122073]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:28 np0005548790.localdomain sudo[122165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxljrqatdqlnjnrzyohmtvdlkpochbxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013248.034039-1071-28551997892120/AnsiballZ_systemd.py
Dec 06 09:27:28 np0005548790.localdomain sudo[122165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:29 np0005548790.localdomain python3.9[122167]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:27:29 np0005548790.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 09:27:29 np0005548790.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 06 09:27:29 np0005548790.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 06 09:27:29 np0005548790.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 06 09:27:29 np0005548790.localdomain systemd-modules-load[122171]: Module 'msr' is built in
Dec 06 09:27:29 np0005548790.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 06 09:27:29 np0005548790.localdomain sudo[122165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:30 np0005548790.localdomain sudo[122262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hujqfcyfjtejxfeubbpwltimakoaomin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013250.0366201-1095-38498897249319/AnsiballZ_stat.py
Dec 06 09:27:30 np0005548790.localdomain sudo[122262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:30 np0005548790.localdomain python3.9[122264]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:27:30 np0005548790.localdomain sudo[122262]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64609 DF PROTO=TCP SPT=33488 DPT=9105 SEQ=2527046682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDED9F0000000001030307) 
Dec 06 09:27:30 np0005548790.localdomain sudo[122335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oefyoxpifsbpbtqhklcvokojdmxxfyui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013250.0366201-1095-38498897249319/AnsiballZ_copy.py
Dec 06 09:27:30 np0005548790.localdomain sudo[122335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:31 np0005548790.localdomain python3.9[122337]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013250.0366201-1095-38498897249319/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:27:31 np0005548790.localdomain sudo[122335]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8604 DF PROTO=TCP SPT=51718 DPT=9102 SEQ=2460504535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EDF9200000000001030307) 
Dec 06 09:27:36 np0005548790.localdomain sudo[122427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eiolmnrmftktronoyyvuybsxnjomzfwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013255.8897722-1148-259581226374806/AnsiballZ_dnf.py
Dec 06 09:27:36 np0005548790.localdomain sudo[122427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:36 np0005548790.localdomain python3.9[122429]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:27:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40653 DF PROTO=TCP SPT=37918 DPT=9101 SEQ=2874942182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE05200000000001030307) 
Dec 06 09:27:39 np0005548790.localdomain sudo[122427]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43473 DF PROTO=TCP SPT=44426 DPT=9100 SEQ=2308500621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE131F0000000001030307) 
Dec 06 09:27:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17703 DF PROTO=TCP SPT=33820 DPT=9100 SEQ=2642317867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE1EA00000000001030307) 
Dec 06 09:27:45 np0005548790.localdomain python3.9[122521]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:27:47 np0005548790.localdomain python3.9[122613]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 06 09:27:47 np0005548790.localdomain python3.9[122703]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:27:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52792 DF PROTO=TCP SPT=52648 DPT=9102 SEQ=2592058696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE32910000000001030307) 
Dec 06 09:27:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41597 DF PROTO=TCP SPT=38838 DPT=9882 SEQ=310167516 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE32970000000001030307) 
Dec 06 09:27:48 np0005548790.localdomain sudo[122793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mvkcljlowaqyrpsgplwtrxshxgpsgwdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013268.2491128-1272-151891383557723/AnsiballZ_systemd.py
Dec 06 09:27:48 np0005548790.localdomain sudo[122793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:48 np0005548790.localdomain python3.9[122795]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:27:49 np0005548790.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 06 09:27:49 np0005548790.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 06 09:27:49 np0005548790.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 06 09:27:49 np0005548790.localdomain systemd[1]: tuned.service: Consumed 1.676s CPU time, no IO.
Dec 06 09:27:49 np0005548790.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 06 09:27:51 np0005548790.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 06 09:27:51 np0005548790.localdomain sudo[122793]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52794 DF PROTO=TCP SPT=52648 DPT=9102 SEQ=2592058696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE3E9F0000000001030307) 
Dec 06 09:27:52 np0005548790.localdomain python3.9[122898]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 06 09:27:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45019 DF PROTO=TCP SPT=53874 DPT=9105 SEQ=1227853106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE4AE00000000001030307) 
Dec 06 09:27:56 np0005548790.localdomain sudo[122988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-siuarmzaiwcavvxclzrwmzlicpyffrod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013276.0404322-1443-246953131157610/AnsiballZ_systemd.py
Dec 06 09:27:56 np0005548790.localdomain sudo[122988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:56 np0005548790.localdomain python3.9[122990]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:27:56 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:27:56 np0005548790.localdomain systemd-rc-local-generator[123017]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:27:56 np0005548790.localdomain systemd-sysv-generator[123021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:27:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:27:56 np0005548790.localdomain sudo[122988]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:57 np0005548790.localdomain sudo[123118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwtflsdgrmctulckidzyhvtwovdeysmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013277.0688713-1443-26545215222104/AnsiballZ_systemd.py
Dec 06 09:27:57 np0005548790.localdomain sudo[123118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:57 np0005548790.localdomain python3.9[123120]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:27:57 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:27:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21422 DF PROTO=TCP SPT=39224 DPT=9105 SEQ=423463916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE571F0000000001030307) 
Dec 06 09:27:57 np0005548790.localdomain systemd-rc-local-generator[123144]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:27:57 np0005548790.localdomain systemd-sysv-generator[123149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:27:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:27:57 np0005548790.localdomain sudo[123118]: pam_unix(sudo:session): session closed for user root
Dec 06 09:27:59 np0005548790.localdomain sudo[123248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zapuxdxpasimiqfzxrtncwomwkcwtvoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013279.385809-1491-21488533051672/AnsiballZ_command.py
Dec 06 09:27:59 np0005548790.localdomain sudo[123248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:27:59 np0005548790.localdomain python3.9[123250]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:27:59 np0005548790.localdomain sudo[123248]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:00 np0005548790.localdomain sudo[123341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ulkwzciermonxxnqptlelynkrstzfyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013280.1276872-1516-100479980223500/AnsiballZ_command.py
Dec 06 09:28:00 np0005548790.localdomain sudo[123341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:00 np0005548790.localdomain python3.9[123343]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:28:00 np0005548790.localdomain kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k FS
Dec 06 09:28:00 np0005548790.localdomain sudo[123341]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45021 DF PROTO=TCP SPT=53874 DPT=9105 SEQ=1227853106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE629F0000000001030307) 
Dec 06 09:28:01 np0005548790.localdomain sudo[123434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdfinnmohcotesllujjgpinjemfabbni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013280.7809412-1539-39882840796455/AnsiballZ_command.py
Dec 06 09:28:01 np0005548790.localdomain sudo[123434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:01 np0005548790.localdomain python3.9[123436]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:28:02 np0005548790.localdomain sudo[123434]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:03 np0005548790.localdomain sudo[123533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uucgzrtkhiacszxrqylzvclanxlbybat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013283.1571195-1563-200594443392874/AnsiballZ_command.py
Dec 06 09:28:03 np0005548790.localdomain sudo[123533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:03 np0005548790.localdomain python3.9[123535]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:28:03 np0005548790.localdomain sudo[123533]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52796 DF PROTO=TCP SPT=52648 DPT=9102 SEQ=2592058696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE6F1F0000000001030307) 
Dec 06 09:28:04 np0005548790.localdomain sudo[123626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oddnxuwcrcqkcuuycjotnnbcqmcijfms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013283.848799-1588-161122068988209/AnsiballZ_systemd.py
Dec 06 09:28:04 np0005548790.localdomain sudo[123626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:04 np0005548790.localdomain python3.9[123628]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:28:04 np0005548790.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 06 09:28:04 np0005548790.localdomain systemd[1]: Stopped Apply Kernel Variables.
Dec 06 09:28:04 np0005548790.localdomain systemd[1]: Stopping Apply Kernel Variables...
Dec 06 09:28:04 np0005548790.localdomain systemd[1]: Starting Apply Kernel Variables...
Dec 06 09:28:04 np0005548790.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 06 09:28:04 np0005548790.localdomain systemd[1]: Finished Apply Kernel Variables.
Dec 06 09:28:04 np0005548790.localdomain sudo[123626]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:05 np0005548790.localdomain sshd[117506]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:28:05 np0005548790.localdomain systemd[1]: session-38.scope: Deactivated successfully.
Dec 06 09:28:05 np0005548790.localdomain systemd[1]: session-38.scope: Consumed 1min 55.089s CPU time.
Dec 06 09:28:05 np0005548790.localdomain systemd-logind[760]: Session 38 logged out. Waiting for processes to exit.
Dec 06 09:28:05 np0005548790.localdomain systemd-logind[760]: Removed session 38.
Dec 06 09:28:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35929 DF PROTO=TCP SPT=45008 DPT=9101 SEQ=1205595828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE791F0000000001030307) 
Dec 06 09:28:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19218 DF PROTO=TCP SPT=48084 DPT=9100 SEQ=306944250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE871F0000000001030307) 
Dec 06 09:28:13 np0005548790.localdomain sshd[123648]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:28:13 np0005548790.localdomain sshd[123648]: Accepted publickey for zuul from 192.168.122.30 port 35752 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:28:13 np0005548790.localdomain systemd-logind[760]: New session 39 of user zuul.
Dec 06 09:28:13 np0005548790.localdomain systemd[1]: Started Session 39 of User zuul.
Dec 06 09:28:13 np0005548790.localdomain sshd[123648]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:28:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42325 DF PROTO=TCP SPT=34168 DPT=9100 SEQ=861808835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EE93E00000000001030307) 
Dec 06 09:28:14 np0005548790.localdomain python3.9[123741]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:15 np0005548790.localdomain python3.9[123835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61796 DF PROTO=TCP SPT=41702 DPT=9102 SEQ=433738455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEA7C20000000001030307) 
Dec 06 09:28:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7632 DF PROTO=TCP SPT=45114 DPT=9882 SEQ=732508119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEA7C90000000001030307) 
Dec 06 09:28:18 np0005548790.localdomain sudo[123929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jggyvtjgtacdfwwvupypipefaunzfzbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013298.4946408-112-265722678688039/AnsiballZ_command.py
Dec 06 09:28:18 np0005548790.localdomain sudo[123929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:19 np0005548790.localdomain python3.9[123931]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:28:19 np0005548790.localdomain sudo[123929]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:20 np0005548790.localdomain python3.9[124022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:20 np0005548790.localdomain sudo[124116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-npbhbiogkmumekiigyxmqyhpdwuwvmej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013300.5801969-172-229201273242530/AnsiballZ_setup.py
Dec 06 09:28:20 np0005548790.localdomain sudo[124116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:21 np0005548790.localdomain python3.9[124118]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:28:21 np0005548790.localdomain sudo[124116]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61798 DF PROTO=TCP SPT=41702 DPT=9102 SEQ=433738455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEB3DF0000000001030307) 
Dec 06 09:28:22 np0005548790.localdomain sudo[124170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fftyifbhiyhebozpimppxvwmuitycojp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013300.5801969-172-229201273242530/AnsiballZ_dnf.py
Dec 06 09:28:22 np0005548790.localdomain sudo[124170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:22 np0005548790.localdomain sudo[124172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:28:22 np0005548790.localdomain sudo[124172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:28:22 np0005548790.localdomain sudo[124172]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:22 np0005548790.localdomain sudo[124188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:28:22 np0005548790.localdomain sudo[124188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:28:22 np0005548790.localdomain python3.9[124178]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:28:22 np0005548790.localdomain sudo[124188]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:24 np0005548790.localdomain sudo[124238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:28:24 np0005548790.localdomain sudo[124238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:28:24 np0005548790.localdomain sudo[124238]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44237 DF PROTO=TCP SPT=51542 DPT=9105 SEQ=450121106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEC0200000000001030307) 
Dec 06 09:28:25 np0005548790.localdomain sudo[124170]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:25 np0005548790.localdomain sudo[124342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fufvuyhovyricbixpacwocdqzdqvhhug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013305.5694294-208-250448380305741/AnsiballZ_setup.py
Dec 06 09:28:25 np0005548790.localdomain sudo[124342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:26 np0005548790.localdomain python3.9[124344]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:28:26 np0005548790.localdomain sudo[124342]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:27 np0005548790.localdomain sudo[124489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzojwmsmvcfgkbzrrpmmlpydnivbhmaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013306.792415-242-129492968001001/AnsiballZ_file.py
Dec 06 09:28:27 np0005548790.localdomain sudo[124489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:27 np0005548790.localdomain python3.9[124491]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:28:27 np0005548790.localdomain sudo[124489]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64612 DF PROTO=TCP SPT=33488 DPT=9105 SEQ=2527046682 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EECB1F0000000001030307) 
Dec 06 09:28:28 np0005548790.localdomain sudo[124581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljlbyuuegznaojfcbepgrwsxwmriwhuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013307.5473769-266-277113855544037/AnsiballZ_command.py
Dec 06 09:28:28 np0005548790.localdomain sudo[124581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:28 np0005548790.localdomain python3.9[124583]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:28:28 np0005548790.localdomain sudo[124581]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:29 np0005548790.localdomain sudo[124684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvniruqxzaoxgebiyupjwqolcvxoffwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013308.7725224-289-239335458837430/AnsiballZ_stat.py
Dec 06 09:28:29 np0005548790.localdomain sudo[124684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:29 np0005548790.localdomain python3.9[124686]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:28:29 np0005548790.localdomain sudo[124684]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:29 np0005548790.localdomain sudo[124732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkjhtsrvyfnqhreltchrflurdvzrjoga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013308.7725224-289-239335458837430/AnsiballZ_file.py
Dec 06 09:28:29 np0005548790.localdomain sudo[124732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:29 np0005548790.localdomain python3.9[124734]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:28:29 np0005548790.localdomain sudo[124732]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:30 np0005548790.localdomain sudo[124824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbehpnqqxdmnlmibpatedmgvpkonyzdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013310.097674-326-236296776427702/AnsiballZ_stat.py
Dec 06 09:28:30 np0005548790.localdomain sudo[124824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:30 np0005548790.localdomain python3.9[124826]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:28:30 np0005548790.localdomain sudo[124824]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44239 DF PROTO=TCP SPT=51542 DPT=9105 SEQ=450121106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EED7DF0000000001030307) 
Dec 06 09:28:31 np0005548790.localdomain sudo[124897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zeryrvfspbsmcwcdzmvmwlvpjhjowgtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013310.097674-326-236296776427702/AnsiballZ_copy.py
Dec 06 09:28:31 np0005548790.localdomain sudo[124897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:31 np0005548790.localdomain python3.9[124899]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013310.097674-326-236296776427702/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:31 np0005548790.localdomain sudo[124897]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:31 np0005548790.localdomain sudo[124989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxgkdfentxopanlovkcrgsctthcoxddf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013311.5298975-373-116715353044369/AnsiballZ_ini_file.py
Dec 06 09:28:31 np0005548790.localdomain sudo[124989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:32 np0005548790.localdomain python3.9[124991]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:32 np0005548790.localdomain sudo[124989]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:32 np0005548790.localdomain sudo[125081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tegprumgovtgtjeuebzwylywiapktlvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013312.2143266-373-120650508741686/AnsiballZ_ini_file.py
Dec 06 09:28:32 np0005548790.localdomain sudo[125081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:32 np0005548790.localdomain python3.9[125083]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:32 np0005548790.localdomain sudo[125081]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:32 np0005548790.localdomain sudo[125173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfopbyqiqktxedurecddeglfcpvorybx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013312.7502618-373-3702779124379/AnsiballZ_ini_file.py
Dec 06 09:28:32 np0005548790.localdomain sudo[125173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:33 np0005548790.localdomain python3.9[125175]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:33 np0005548790.localdomain sudo[125173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:33 np0005548790.localdomain sudo[125265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbelxwolkwlnhohoisvreoxpqkpgmzqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013313.3302295-373-158662909634535/AnsiballZ_ini_file.py
Dec 06 09:28:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61800 DF PROTO=TCP SPT=41702 DPT=9102 SEQ=433738455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEE31F0000000001030307) 
Dec 06 09:28:33 np0005548790.localdomain sudo[125265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:33 np0005548790.localdomain python3.9[125267]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:28:33 np0005548790.localdomain sudo[125265]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14833 DF PROTO=TCP SPT=51298 DPT=9101 SEQ=1441424277 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEEF1F0000000001030307) 
Dec 06 09:28:37 np0005548790.localdomain python3.9[125357]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:28:37 np0005548790.localdomain sudo[125449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtvovppflqchtbmgmqykuijxhjkzhssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013317.3667827-493-251306342474967/AnsiballZ_dnf.py
Dec 06 09:28:37 np0005548790.localdomain sudo[125449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:37 np0005548790.localdomain python3.9[125451]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:28:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17706 DF PROTO=TCP SPT=33820 DPT=9100 SEQ=2642317867 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EEFD200000000001030307) 
Dec 06 09:28:41 np0005548790.localdomain sudo[125449]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:41 np0005548790.localdomain sudo[125543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivqtqgvbufaxxgdvpekhuqygkgoudpmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013321.2608812-517-72202901307893/AnsiballZ_dnf.py
Dec 06 09:28:41 np0005548790.localdomain sudo[125543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:41 np0005548790.localdomain python3.9[125545]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:28:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65173 DF PROTO=TCP SPT=38662 DPT=9100 SEQ=2784398179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF09200000000001030307) 
Dec 06 09:28:44 np0005548790.localdomain sudo[125543]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:45 np0005548790.localdomain sudo[125637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnbhdmldlkjaugvxijltjrqyifucdohp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013325.1523254-547-259293032272907/AnsiballZ_dnf.py
Dec 06 09:28:45 np0005548790.localdomain sudo[125637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:45 np0005548790.localdomain python3.9[125639]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:28:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24780 DF PROTO=TCP SPT=34478 DPT=9102 SEQ=1126105620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF1CF00000000001030307) 
Dec 06 09:28:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7904 DF PROTO=TCP SPT=42582 DPT=9882 SEQ=3486275263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF1CF80000000001030307) 
Dec 06 09:28:48 np0005548790.localdomain sudo[125637]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:49 np0005548790.localdomain sudo[125737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqazdeluldwktlpebnvkdsghbmrnnyjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013329.1778421-574-246082294501220/AnsiballZ_dnf.py
Dec 06 09:28:49 np0005548790.localdomain sudo[125737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:49 np0005548790.localdomain python3.9[125739]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:28:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24782 DF PROTO=TCP SPT=34478 DPT=9102 SEQ=1126105620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF28DF0000000001030307) 
Dec 06 09:28:52 np0005548790.localdomain sudo[125737]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:53 np0005548790.localdomain sudo[125831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqjilunbmiwfyrgbqhoposhmmzlqwagt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013333.2538023-610-255885572563401/AnsiballZ_dnf.py
Dec 06 09:28:53 np0005548790.localdomain sudo[125831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:53 np0005548790.localdomain python3.9[125833]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:28:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12316 DF PROTO=TCP SPT=50134 DPT=9105 SEQ=601885340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF355F0000000001030307) 
Dec 06 09:28:56 np0005548790.localdomain sudo[125831]: pam_unix(sudo:session): session closed for user root
Dec 06 09:28:57 np0005548790.localdomain sudo[125925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-veplqlwguvhovriytxsimalnieizsuwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013337.3098848-637-260586218264641/AnsiballZ_dnf.py
Dec 06 09:28:57 np0005548790.localdomain sudo[125925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:28:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45024 DF PROTO=TCP SPT=53874 DPT=9105 SEQ=1227853106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF41200000000001030307) 
Dec 06 09:28:57 np0005548790.localdomain python3.9[125927]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:29:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12318 DF PROTO=TCP SPT=50134 DPT=9105 SEQ=601885340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF4D1F0000000001030307) 
Dec 06 09:29:00 np0005548790.localdomain sudo[125925]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:01 np0005548790.localdomain sudo[126019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nobtxhokcppwacequbmwwuapdkgeqizb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013341.627324-664-150744486451568/AnsiballZ_dnf.py
Dec 06 09:29:01 np0005548790.localdomain sudo[126019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:02 np0005548790.localdomain python3.9[126021]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:29:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24784 DF PROTO=TCP SPT=34478 DPT=9102 SEQ=1126105620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF591F0000000001030307) 
Dec 06 09:29:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26306 DF PROTO=TCP SPT=35606 DPT=9100 SEQ=908246323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF665F0000000001030307) 
Dec 06 09:29:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42328 DF PROTO=TCP SPT=34168 DPT=9100 SEQ=861808835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF73200000000001030307) 
Dec 06 09:29:11 np0005548790.localdomain sudo[126019]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:13 np0005548790.localdomain sudo[126186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqjunarmscukiscnjtdvqudgayiughqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013352.8179152-700-42112806280281/AnsiballZ_file.py
Dec 06 09:29:13 np0005548790.localdomain sudo[126186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:13 np0005548790.localdomain python3.9[126188]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:29:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26308 DF PROTO=TCP SPT=35606 DPT=9100 SEQ=908246323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF7E1F0000000001030307) 
Dec 06 09:29:13 np0005548790.localdomain sudo[126186]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:13 np0005548790.localdomain sudo[126291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfbvgakwexymcexwcqtbettbxyinpmqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013353.449954-724-117593161368640/AnsiballZ_stat.py
Dec 06 09:29:13 np0005548790.localdomain sudo[126291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:13 np0005548790.localdomain python3.9[126293]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:29:13 np0005548790.localdomain sudo[126291]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:14 np0005548790.localdomain sudo[126364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fufhajepoklpjmhepjnyigpqahvvekap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013353.449954-724-117593161368640/AnsiballZ_copy.py
Dec 06 09:29:14 np0005548790.localdomain sudo[126364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:14 np0005548790.localdomain python3.9[126366]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765013353.449954-724-117593161368640/.source.json _original_basename=.iyejo4gg follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:29:14 np0005548790.localdomain sudo[126364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:15 np0005548790.localdomain sudo[126456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mevkxossrvfyhgufacgnggrlltucwwyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013354.9559526-778-187933607711745/AnsiballZ_podman_image.py
Dec 06 09:29:15 np0005548790.localdomain sudo[126456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:15 np0005548790.localdomain python3.9[126458]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 06 09:29:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8351 DF PROTO=TCP SPT=58362 DPT=9102 SEQ=1411049172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF92210000000001030307) 
Dec 06 09:29:18 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Dec 06 09:29:18 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 09:29:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23734 DF PROTO=TCP SPT=45276 DPT=9882 SEQ=2274883919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF92270000000001030307) 
Dec 06 09:29:18 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:29:18 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:29:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23736 DF PROTO=TCP SPT=45276 DPT=9882 SEQ=2274883919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EF9E1F0000000001030307) 
Dec 06 09:29:21 np0005548790.localdomain podman[126470]: 2025-12-06 09:29:15.698878583 +0000 UTC m=+0.034151807 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 09:29:21 np0005548790.localdomain sudo[126456]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:23 np0005548790.localdomain sudo[126668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbovsuenqbzkfipfruyubzfgbojlqblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013362.6496148-811-41446770441658/AnsiballZ_podman_image.py
Dec 06 09:29:23 np0005548790.localdomain sudo[126668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:23 np0005548790.localdomain python3.9[126670]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 06 09:29:24 np0005548790.localdomain sudo[126697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:29:24 np0005548790.localdomain sudo[126697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:29:24 np0005548790.localdomain sudo[126697]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:24 np0005548790.localdomain sudo[126712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:29:24 np0005548790.localdomain sudo[126712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:29:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43214 DF PROTO=TCP SPT=40056 DPT=9105 SEQ=1910609702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFAA9F0000000001030307) 
Dec 06 09:29:25 np0005548790.localdomain sudo[126712]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:25 np0005548790.localdomain sudo[126760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:29:25 np0005548790.localdomain sudo[126760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:29:25 np0005548790.localdomain sudo[126760]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:25 np0005548790.localdomain sudo[126775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 09:29:25 np0005548790.localdomain sudo[126775]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:29:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44242 DF PROTO=TCP SPT=51542 DPT=9105 SEQ=450121106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFB71F0000000001030307) 
Dec 06 09:29:29 np0005548790.localdomain sudo[126775]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43216 DF PROTO=TCP SPT=40056 DPT=9105 SEQ=1910609702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFC25F0000000001030307) 
Dec 06 09:29:30 np0005548790.localdomain podman[126684]: 2025-12-06 09:29:23.831332785 +0000 UTC m=+0.046105578 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 09:29:31 np0005548790.localdomain sudo[126886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:29:31 np0005548790.localdomain sudo[126886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:29:31 np0005548790.localdomain sudo[126886]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:31 np0005548790.localdomain sudo[126668]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:32 np0005548790.localdomain sudo[127002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyrsmetcxnfxzdubvgukobsoktfzyvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013372.2855525-847-12719849621384/AnsiballZ_podman_image.py
Dec 06 09:29:32 np0005548790.localdomain sudo[127002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:32 np0005548790.localdomain python3.9[127004]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 06 09:29:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23738 DF PROTO=TCP SPT=45276 DPT=9882 SEQ=2274883919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFCF1F0000000001030307) 
Dec 06 09:29:34 np0005548790.localdomain podman[127015]: 2025-12-06 09:29:32.882384075 +0000 UTC m=+0.045487792 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 06 09:29:34 np0005548790.localdomain sudo[127002]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:35 np0005548790.localdomain sudo[127177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zovtzdjrwathrwfsfwwioudzdszhciyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013375.1696694-874-254068137424513/AnsiballZ_podman_image.py
Dec 06 09:29:35 np0005548790.localdomain sudo[127177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:35 np0005548790.localdomain python3.9[127179]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 06 09:29:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30476 DF PROTO=TCP SPT=54874 DPT=9101 SEQ=1849156121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFD91F0000000001030307) 
Dec 06 09:29:36 np0005548790.localdomain podman[127192]: 2025-12-06 09:29:35.767290677 +0000 UTC m=+0.040714262 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 09:29:37 np0005548790.localdomain sudo[127177]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:37 np0005548790.localdomain sudo[127355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubnvcwjohipcookmhunszxemicugqxci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013377.4943006-901-80198345623884/AnsiballZ_podman_image.py
Dec 06 09:29:37 np0005548790.localdomain sudo[127355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:37 np0005548790.localdomain python3.9[127357]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 06 09:29:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65176 DF PROTO=TCP SPT=38662 DPT=9100 SEQ=2784398179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFE71F0000000001030307) 
Dec 06 09:29:41 np0005548790.localdomain podman[127370]: 2025-12-06 09:29:38.067187821 +0000 UTC m=+0.032103002 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 06 09:29:41 np0005548790.localdomain sudo[127355]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:41 np0005548790.localdomain sudo[127542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gaiycafibttkgnrtjylqajfektwnlpnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013381.5542898-901-187593936636874/AnsiballZ_podman_image.py
Dec 06 09:29:41 np0005548790.localdomain sudo[127542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:42 np0005548790.localdomain python3.9[127544]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 06 09:29:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22801 DF PROTO=TCP SPT=50464 DPT=9100 SEQ=3009853788 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17EFF35F0000000001030307) 
Dec 06 09:29:43 np0005548790.localdomain podman[127556]: 2025-12-06 09:29:42.140992591 +0000 UTC m=+0.044176036 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 06 09:29:43 np0005548790.localdomain sudo[127542]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:45 np0005548790.localdomain sshd[123648]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:29:45 np0005548790.localdomain systemd[1]: session-39.scope: Deactivated successfully.
Dec 06 09:29:45 np0005548790.localdomain systemd[1]: session-39.scope: Consumed 1min 26.503s CPU time.
Dec 06 09:29:45 np0005548790.localdomain systemd-logind[760]: Session 39 logged out. Waiting for processes to exit.
Dec 06 09:29:45 np0005548790.localdomain systemd-logind[760]: Removed session 39.
Dec 06 09:29:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61611 DF PROTO=TCP SPT=54716 DPT=9102 SEQ=360727894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F007510000000001030307) 
Dec 06 09:29:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11762 DF PROTO=TCP SPT=43632 DPT=9882 SEQ=3045986845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F007580000000001030307) 
Dec 06 09:29:50 np0005548790.localdomain sshd[127668]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:29:51 np0005548790.localdomain sshd[127668]: Accepted publickey for zuul from 192.168.122.30 port 44642 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:29:51 np0005548790.localdomain systemd-logind[760]: New session 40 of user zuul.
Dec 06 09:29:51 np0005548790.localdomain systemd[1]: Started Session 40 of User zuul.
Dec 06 09:29:51 np0005548790.localdomain sshd[127668]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:29:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22802 DF PROTO=TCP SPT=50464 DPT=9100 SEQ=3009853788 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F013200000000001030307) 
Dec 06 09:29:52 np0005548790.localdomain python3.9[127761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:29:53 np0005548790.localdomain sudo[127866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ipsuaemqesrzrwkrcetldxdcrhhdizfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013392.7254987-70-234545389064442/AnsiballZ_getent.py
Dec 06 09:29:53 np0005548790.localdomain sudo[127866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:53 np0005548790.localdomain python3.9[127868]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 06 09:29:54 np0005548790.localdomain sudo[127866]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=453 DF PROTO=TCP SPT=56972 DPT=9105 SEQ=3095613566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F01FA00000000001030307) 
Dec 06 09:29:54 np0005548790.localdomain sudo[127975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jytaqejekmtvmpbfowwyuxudoeugaygb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013394.660632-106-269634669464595/AnsiballZ_setup.py
Dec 06 09:29:54 np0005548790.localdomain sudo[127975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:55 np0005548790.localdomain python3.9[127977]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:29:55 np0005548790.localdomain sudo[127975]: pam_unix(sudo:session): session closed for user root
Dec 06 09:29:56 np0005548790.localdomain sudo[128029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yitxggytcowzwzodigrjmzhlzbeviiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013394.660632-106-269634669464595/AnsiballZ_dnf.py
Dec 06 09:29:56 np0005548790.localdomain sudo[128029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:29:56 np0005548790.localdomain python3.9[128031]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:29:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12321 DF PROTO=TCP SPT=50134 DPT=9105 SEQ=601885340 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F02B1F0000000001030307) 
Dec 06 09:30:00 np0005548790.localdomain sudo[128029]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:00 np0005548790.localdomain sudo[128378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kygsmwcfrpdxgnrqldqgwzctgonsslzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013400.3587735-148-91640809542244/AnsiballZ_dnf.py
Dec 06 09:30:00 np0005548790.localdomain sudo[128378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=455 DF PROTO=TCP SPT=56972 DPT=9105 SEQ=3095613566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0375F0000000001030307) 
Dec 06 09:30:00 np0005548790.localdomain python3.9[128380]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:30:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61615 DF PROTO=TCP SPT=54716 DPT=9102 SEQ=360727894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F043200000000001030307) 
Dec 06 09:30:03 np0005548790.localdomain sudo[128378]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:05 np0005548790.localdomain sudo[128472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrectjdemhbhcxnzckyzfroqdootfodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013404.2734056-172-266884189488068/AnsiballZ_systemd.py
Dec 06 09:30:05 np0005548790.localdomain sudo[128472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:05 np0005548790.localdomain python3.9[128474]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:30:05 np0005548790.localdomain sudo[128472]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7649 DF PROTO=TCP SPT=38308 DPT=9101 SEQ=1161043454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F04F200000000001030307) 
Dec 06 09:30:08 np0005548790.localdomain python3.9[128567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:30:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:30:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:30:09 np0005548790.localdomain sudo[128657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcjopjfbddhivntlpxkdsmsbvasqpyqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013409.1998372-226-227132227657898/AnsiballZ_sefcontext.py
Dec 06 09:30:09 np0005548790.localdomain sudo[128657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:09 np0005548790.localdomain python3.9[128659]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 06 09:30:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26311 DF PROTO=TCP SPT=35606 DPT=9100 SEQ=908246323 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F05D1F0000000001030307) 
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  Converting 2743 SID table entries...
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:30:11 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:30:11 np0005548790.localdomain sudo[128657]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:12 np0005548790.localdomain python3.9[128889]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:30:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:30:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:30:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54933 DF PROTO=TCP SPT=58688 DPT=9100 SEQ=3781664781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0689F0000000001030307) 
Dec 06 09:30:13 np0005548790.localdomain sudo[128985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odewqlffjewimlfpucjbpvyqkysonber ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013413.2631316-280-48246181806504/AnsiballZ_dnf.py
Dec 06 09:30:13 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=18 res=1
Dec 06 09:30:13 np0005548790.localdomain sudo[128985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:13 np0005548790.localdomain python3.9[128987]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:30:17 np0005548790.localdomain sudo[128985]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49618 DF PROTO=TCP SPT=34444 DPT=9102 SEQ=412263320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F07C810000000001030307) 
Dec 06 09:30:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31971 DF PROTO=TCP SPT=56920 DPT=9882 SEQ=761445619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F07C880000000001030307) 
Dec 06 09:30:18 np0005548790.localdomain sudo[129079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmhgarmghoiapxipbdxdwoybqmwrnooz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013417.9953618-304-145282917711250/AnsiballZ_command.py
Dec 06 09:30:18 np0005548790.localdomain sudo[129079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:18 np0005548790.localdomain python3.9[129081]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:30:19 np0005548790.localdomain sudo[129079]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:20 np0005548790.localdomain sudo[129324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-reozalzzlqukrtoqzwurbonvthosshmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013419.6187387-329-251653060938119/AnsiballZ_file.py
Dec 06 09:30:20 np0005548790.localdomain sudo[129324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:20 np0005548790.localdomain python3.9[129326]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:30:20 np0005548790.localdomain sudo[129324]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:20 np0005548790.localdomain python3.9[129416]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:30:21 np0005548790.localdomain sudo[129508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vyozijxiiktbjowzorxzzednuforihfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013421.1750872-382-7147745446099/AnsiballZ_dnf.py
Dec 06 09:30:21 np0005548790.localdomain sudo[129508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49620 DF PROTO=TCP SPT=34444 DPT=9102 SEQ=412263320 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0889F0000000001030307) 
Dec 06 09:30:21 np0005548790.localdomain python3.9[129510]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:30:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40547 DF PROTO=TCP SPT=45644 DPT=9105 SEQ=689069432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F094DF0000000001030307) 
Dec 06 09:30:25 np0005548790.localdomain sudo[129508]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:25 np0005548790.localdomain sudo[129602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvvxrjfpetdvawiqlvoayrabzyuwdcpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013425.244384-406-131684893308850/AnsiballZ_dnf.py
Dec 06 09:30:25 np0005548790.localdomain sudo[129602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:25 np0005548790.localdomain python3.9[129604]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:30:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43219 DF PROTO=TCP SPT=40056 DPT=9105 SEQ=1910609702 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0A11F0000000001030307) 
Dec 06 09:30:28 np0005548790.localdomain sudo[129602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:29 np0005548790.localdomain sudo[129696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-koubzgbavqqsgjvriixpvugivzfypfmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013429.1507173-430-161275881920919/AnsiballZ_systemd.py
Dec 06 09:30:29 np0005548790.localdomain sudo[129696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:29 np0005548790.localdomain python3.9[129698]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 09:30:29 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:30:29 np0005548790.localdomain systemd-rc-local-generator[129727]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:30:29 np0005548790.localdomain systemd-sysv-generator[129730]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:30:29 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:30:30 np0005548790.localdomain sudo[129696]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40549 DF PROTO=TCP SPT=45644 DPT=9105 SEQ=689069432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0ACA00000000001030307) 
Dec 06 09:30:31 np0005548790.localdomain sudo[129753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:30:31 np0005548790.localdomain sudo[129753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:30:31 np0005548790.localdomain sudo[129753]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:31 np0005548790.localdomain sudo[129768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:30:31 np0005548790.localdomain sudo[129768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:30:31 np0005548790.localdomain sudo[129875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwmaimigvcliohfeqzafoeotiyslhmxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013431.6101253-460-53917040863819/AnsiballZ_stat.py
Dec 06 09:30:31 np0005548790.localdomain sudo[129875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:32 np0005548790.localdomain sudo[129768]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:32 np0005548790.localdomain python3.9[129878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:30:32 np0005548790.localdomain sudo[129875]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:32 np0005548790.localdomain sudo[129981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txyxefdcopujiuudavgbsummkrllrvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013432.3309622-487-86485173209544/AnsiballZ_ini_file.py
Dec 06 09:30:32 np0005548790.localdomain sudo[129981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:32 np0005548790.localdomain python3.9[129983]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:32 np0005548790.localdomain sudo[129981]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:33 np0005548790.localdomain sudo[130075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgclgazsngjalkxxaqmsnacysvtbbcdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013433.1142197-511-140122550297478/AnsiballZ_ini_file.py
Dec 06 09:30:33 np0005548790.localdomain sudo[130075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:33 np0005548790.localdomain python3.9[130077]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:33 np0005548790.localdomain sudo[130075]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31975 DF PROTO=TCP SPT=56920 DPT=9882 SEQ=761445619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0B91F0000000001030307) 
Dec 06 09:30:34 np0005548790.localdomain sudo[130167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phraornphmljdbctotuznvpgxavqxbyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013433.7771063-535-172014363741559/AnsiballZ_ini_file.py
Dec 06 09:30:34 np0005548790.localdomain sudo[130167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:34 np0005548790.localdomain python3.9[130169]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:34 np0005548790.localdomain sudo[130167]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:35 np0005548790.localdomain sudo[130259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdgzxmxktutleogquunmetdfwlzwjzih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013434.758109-565-253190817322249/AnsiballZ_stat.py
Dec 06 09:30:35 np0005548790.localdomain sudo[130259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:35 np0005548790.localdomain python3.9[130261]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:30:35 np0005548790.localdomain sudo[130259]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:35 np0005548790.localdomain sudo[130273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:30:35 np0005548790.localdomain sudo[130273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:30:35 np0005548790.localdomain sudo[130273]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:35 np0005548790.localdomain sudo[130347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkoqdzjacsdgxhlzcsyamqnnhnudkalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013434.758109-565-253190817322249/AnsiballZ_copy.py
Dec 06 09:30:35 np0005548790.localdomain sudo[130347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:35 np0005548790.localdomain python3.9[130349]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013434.758109-565-253190817322249/.source _original_basename=.dxj5swim follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:35 np0005548790.localdomain sudo[130347]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:36 np0005548790.localdomain sudo[130439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiqduzhfkwryrljaudpmswokjhsxrhvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013436.0547907-611-253461783925030/AnsiballZ_file.py
Dec 06 09:30:36 np0005548790.localdomain sudo[130439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10897 DF PROTO=TCP SPT=40734 DPT=9101 SEQ=2450836294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0C31F0000000001030307) 
Dec 06 09:30:36 np0005548790.localdomain python3.9[130441]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:36 np0005548790.localdomain sudo[130439]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:37 np0005548790.localdomain sudo[130531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuakimjmlcktgaxwaceiaqdzhuxuxsai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013436.7321947-634-65184182602467/AnsiballZ_edpm_os_net_config_mappings.py
Dec 06 09:30:37 np0005548790.localdomain sudo[130531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:37 np0005548790.localdomain python3.9[130533]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 06 09:30:37 np0005548790.localdomain sudo[130531]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:37 np0005548790.localdomain sudo[130623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfxfymntunujdmcnbdtujdgxzuyagwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013437.6270056-661-266486571117106/AnsiballZ_file.py
Dec 06 09:30:37 np0005548790.localdomain sudo[130623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:38 np0005548790.localdomain python3.9[130625]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:38 np0005548790.localdomain sudo[130623]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:38 np0005548790.localdomain sudo[130715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzctsqhtlthmdeuxvzoqlgolzplcfhbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013438.4601748-691-61204441692585/AnsiballZ_stat.py
Dec 06 09:30:38 np0005548790.localdomain sudo[130715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:38 np0005548790.localdomain python3.9[130717]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:30:38 np0005548790.localdomain sudo[130715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:39 np0005548790.localdomain sudo[130788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbtxnjexrmntbqikxxwaamkxxrsljggw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013438.4601748-691-61204441692585/AnsiballZ_copy.py
Dec 06 09:30:39 np0005548790.localdomain sudo[130788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:39 np0005548790.localdomain python3.9[130790]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013438.4601748-691-61204441692585/.source.yaml _original_basename=.z5lbqh4c follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:39 np0005548790.localdomain sudo[130788]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22804 DF PROTO=TCP SPT=50464 DPT=9100 SEQ=3009853788 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0D11F0000000001030307) 
Dec 06 09:30:40 np0005548790.localdomain sudo[130880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yakemyriqvlnhzyrwbfwoaflggjjexnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013439.6843476-736-72101949022491/AnsiballZ_slurp.py
Dec 06 09:30:40 np0005548790.localdomain sudo[130880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:40 np0005548790.localdomain python3.9[130882]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 06 09:30:40 np0005548790.localdomain sudo[130880]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:41 np0005548790.localdomain sudo[130985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ysjwscqligzqyejhwcuuveznvzyqmekt ; ANSIBLE_ASYNC_DIR='~/.ansible_async' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013441.1590123-763-250009272682480/async_wrapper.py j512287767301 300 /home/zuul/.ansible/tmp/ansible-tmp-1765013441.1590123-763-250009272682480/AnsiballZ_edpm_os_net_config.py _
Dec 06 09:30:41 np0005548790.localdomain sudo[130985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:42 np0005548790.localdomain ansible-async_wrapper.py[130987]: Invoked with j512287767301 300 /home/zuul/.ansible/tmp/ansible-tmp-1765013441.1590123-763-250009272682480/AnsiballZ_edpm_os_net_config.py _
Dec 06 09:30:42 np0005548790.localdomain ansible-async_wrapper.py[130990]: Starting module and watcher
Dec 06 09:30:42 np0005548790.localdomain ansible-async_wrapper.py[130990]: Start watching 130991 (300)
Dec 06 09:30:42 np0005548790.localdomain ansible-async_wrapper.py[130991]: Start module (130991)
Dec 06 09:30:42 np0005548790.localdomain ansible-async_wrapper.py[130987]: Return async_wrapper task started.
Dec 06 09:30:42 np0005548790.localdomain sudo[130985]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:42 np0005548790.localdomain python3.9[130992]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Dec 06 09:30:43 np0005548790.localdomain ansible-async_wrapper.py[130991]: Module complete (130991)
Dec 06 09:30:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1383 DF PROTO=TCP SPT=42596 DPT=9100 SEQ=3777187908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0DDDF0000000001030307) 
Dec 06 09:30:45 np0005548790.localdomain sudo[131082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwtdzgptcjqcpouiimlezvgdjvtoluzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013445.2234159-763-90255907596911/AnsiballZ_async_status.py
Dec 06 09:30:45 np0005548790.localdomain sudo[131082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:45 np0005548790.localdomain python3.9[131084]: ansible-ansible.legacy.async_status Invoked with jid=j512287767301.130987 mode=status _async_dir=/root/.ansible_async
Dec 06 09:30:45 np0005548790.localdomain sudo[131082]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:46 np0005548790.localdomain sudo[131141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrofpbwvtvwdxbiisedqffqrfycgehsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013445.2234159-763-90255907596911/AnsiballZ_async_status.py
Dec 06 09:30:46 np0005548790.localdomain sudo[131141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:46 np0005548790.localdomain python3.9[131143]: ansible-ansible.legacy.async_status Invoked with jid=j512287767301.130987 mode=cleanup _async_dir=/root/.ansible_async
Dec 06 09:30:46 np0005548790.localdomain sudo[131141]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:46 np0005548790.localdomain sudo[131233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zebcbnstcmzfcvqwnbzloivgeixsvpyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013446.5656402-829-251932588809373/AnsiballZ_stat.py
Dec 06 09:30:46 np0005548790.localdomain sudo[131233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:47 np0005548790.localdomain python3.9[131235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:30:47 np0005548790.localdomain sudo[131233]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:47 np0005548790.localdomain ansible-async_wrapper.py[130990]: Done in kid B.
Dec 06 09:30:47 np0005548790.localdomain sudo[131306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkouwbyrrvgsanyuhyiqiuzciswvxvmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013446.5656402-829-251932588809373/AnsiballZ_copy.py
Dec 06 09:30:47 np0005548790.localdomain sudo[131306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:47 np0005548790.localdomain python3.9[131308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013446.5656402-829-251932588809373/.source.returncode _original_basename=.rj11oy6p follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:47 np0005548790.localdomain sudo[131306]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:48 np0005548790.localdomain sudo[131398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvbitzzgotqipvsesogxtvebwbggrqfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013447.8004127-877-128957984494733/AnsiballZ_stat.py
Dec 06 09:30:48 np0005548790.localdomain sudo[131398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:48 np0005548790.localdomain python3.9[131400]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:30:48 np0005548790.localdomain sudo[131398]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53716 DF PROTO=TCP SPT=46874 DPT=9102 SEQ=1888147466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0F1B00000000001030307) 
Dec 06 09:30:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3412 DF PROTO=TCP SPT=43474 DPT=9882 SEQ=1007976245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0F1B70000000001030307) 
Dec 06 09:30:48 np0005548790.localdomain sudo[131471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iowqdrliabdgtawpqgtuxorucwdsvbmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013447.8004127-877-128957984494733/AnsiballZ_copy.py
Dec 06 09:30:48 np0005548790.localdomain sudo[131471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:48 np0005548790.localdomain python3.9[131473]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013447.8004127-877-128957984494733/.source.cfg _original_basename=.rmsy09yo follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:30:48 np0005548790.localdomain sudo[131471]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:49 np0005548790.localdomain sudo[131563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxfgbtknxqmmfiadumsghxfklbkocmlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013448.927968-923-223459294673418/AnsiballZ_systemd.py
Dec 06 09:30:49 np0005548790.localdomain sudo[131563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:30:49 np0005548790.localdomain python3.9[131565]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:30:49 np0005548790.localdomain systemd[1]: Reloading Network Manager...
Dec 06 09:30:49 np0005548790.localdomain NetworkManager[5968]: <info>  [1765013449.5511] audit: op="reload" arg="0" pid=131569 uid=0 result="success"
Dec 06 09:30:49 np0005548790.localdomain NetworkManager[5968]: <info>  [1765013449.5521] config: signal: SIGHUP (no changes from disk)
Dec 06 09:30:49 np0005548790.localdomain systemd[1]: Reloaded Network Manager.
Dec 06 09:30:49 np0005548790.localdomain sudo[131563]: pam_unix(sudo:session): session closed for user root
Dec 06 09:30:49 np0005548790.localdomain sshd[127668]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:30:49 np0005548790.localdomain systemd-logind[760]: Session 40 logged out. Waiting for processes to exit.
Dec 06 09:30:49 np0005548790.localdomain systemd[1]: session-40.scope: Deactivated successfully.
Dec 06 09:30:49 np0005548790.localdomain systemd[1]: session-40.scope: Consumed 35.477s CPU time.
Dec 06 09:30:49 np0005548790.localdomain systemd-logind[760]: Removed session 40.
Dec 06 09:30:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53718 DF PROTO=TCP SPT=46874 DPT=9102 SEQ=1888147466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F0FD9F0000000001030307) 
Dec 06 09:30:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41020 DF PROTO=TCP SPT=59626 DPT=9105 SEQ=3048302590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F10A1F0000000001030307) 
Dec 06 09:30:56 np0005548790.localdomain sshd[131584]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:30:56 np0005548790.localdomain sshd[131584]: Accepted publickey for zuul from 192.168.122.30 port 54272 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:30:56 np0005548790.localdomain systemd-logind[760]: New session 41 of user zuul.
Dec 06 09:30:56 np0005548790.localdomain systemd[1]: Started Session 41 of User zuul.
Dec 06 09:30:56 np0005548790.localdomain sshd[131584]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:30:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=458 DF PROTO=TCP SPT=56972 DPT=9105 SEQ=3095613566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1151F0000000001030307) 
Dec 06 09:30:57 np0005548790.localdomain python3.9[131677]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:30:58 np0005548790.localdomain python3.9[131771]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:31:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41022 DF PROTO=TCP SPT=59626 DPT=9105 SEQ=3048302590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F121DF0000000001030307) 
Dec 06 09:31:01 np0005548790.localdomain python3.9[131916]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:01 np0005548790.localdomain sshd[131584]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:31:01 np0005548790.localdomain systemd[1]: session-41.scope: Deactivated successfully.
Dec 06 09:31:01 np0005548790.localdomain systemd[1]: session-41.scope: Consumed 2.102s CPU time.
Dec 06 09:31:01 np0005548790.localdomain systemd-logind[760]: Session 41 logged out. Waiting for processes to exit.
Dec 06 09:31:01 np0005548790.localdomain systemd-logind[760]: Removed session 41.
Dec 06 09:31:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3416 DF PROTO=TCP SPT=43474 DPT=9882 SEQ=1007976245 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F12D200000000001030307) 
Dec 06 09:31:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60388 DF PROTO=TCP SPT=59152 DPT=9101 SEQ=653954440 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1391F0000000001030307) 
Dec 06 09:31:07 np0005548790.localdomain sshd[131932]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:31:07 np0005548790.localdomain sshd[131932]: Accepted publickey for zuul from 192.168.122.30 port 47918 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:31:07 np0005548790.localdomain systemd-logind[760]: New session 42 of user zuul.
Dec 06 09:31:07 np0005548790.localdomain systemd[1]: Started Session 42 of User zuul.
Dec 06 09:31:07 np0005548790.localdomain sshd[131932]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:31:08 np0005548790.localdomain python3.9[132025]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:31:09 np0005548790.localdomain python3.9[132119]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:31:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54936 DF PROTO=TCP SPT=58688 DPT=9100 SEQ=3781664781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F147200000000001030307) 
Dec 06 09:31:10 np0005548790.localdomain sudo[132213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxseyjmfobcgtevpibtxnnyedoxdcusg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013470.1827457-82-272182249407638/AnsiballZ_setup.py
Dec 06 09:31:10 np0005548790.localdomain sudo[132213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:10 np0005548790.localdomain python3.9[132215]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:31:11 np0005548790.localdomain sudo[132213]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:11 np0005548790.localdomain sudo[132267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewpptctwaelavxbutbnsbzisycocruri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013470.1827457-82-272182249407638/AnsiballZ_dnf.py
Dec 06 09:31:11 np0005548790.localdomain sudo[132267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:11 np0005548790.localdomain python3.9[132269]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:31:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38626 DF PROTO=TCP SPT=46404 DPT=9100 SEQ=2347100962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F152DF0000000001030307) 
Dec 06 09:31:14 np0005548790.localdomain sudo[132267]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:15 np0005548790.localdomain sudo[132361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfkstzwtorqrfcgfqxrcjhjhzdjjeafo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013474.9103162-118-119678713388830/AnsiballZ_setup.py
Dec 06 09:31:15 np0005548790.localdomain sudo[132361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:15 np0005548790.localdomain python3.9[132363]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:31:15 np0005548790.localdomain sudo[132361]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:17 np0005548790.localdomain sudo[132508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwdironzipdziuraolfjxwvdpkbfxibh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013477.0491006-152-26014486050842/AnsiballZ_file.py
Dec 06 09:31:17 np0005548790.localdomain sudo[132508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:17 np0005548790.localdomain python3.9[132510]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:17 np0005548790.localdomain sudo[132508]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:18 np0005548790.localdomain auditd[726]: Audit daemon rotating log files
Dec 06 09:31:18 np0005548790.localdomain sudo[132600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjqngtkhfzzsvgxwqpemydykvjmtaaxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013477.8639166-176-25123294786704/AnsiballZ_command.py
Dec 06 09:31:18 np0005548790.localdomain sudo[132600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57517 DF PROTO=TCP SPT=41730 DPT=9102 SEQ=2301784906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F166E10000000001030307) 
Dec 06 09:31:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5482 DF PROTO=TCP SPT=43570 DPT=9882 SEQ=527289569 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F166E70000000001030307) 
Dec 06 09:31:18 np0005548790.localdomain python3.9[132602]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:18 np0005548790.localdomain sudo[132600]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:19 np0005548790.localdomain sudo[132703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htdkjquppuidrdvwvqsyawtakefbuqwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013478.7367105-199-187862814298039/AnsiballZ_stat.py
Dec 06 09:31:19 np0005548790.localdomain sudo[132703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:19 np0005548790.localdomain python3.9[132705]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:19 np0005548790.localdomain sudo[132703]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:19 np0005548790.localdomain sudo[132751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyowtvrnndeizykxifdxlefybihhvace ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013478.7367105-199-187862814298039/AnsiballZ_file.py
Dec 06 09:31:19 np0005548790.localdomain sudo[132751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:19 np0005548790.localdomain python3.9[132753]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:19 np0005548790.localdomain sudo[132751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:20 np0005548790.localdomain sudo[132843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rguullpjcacwxftqvvvpemjvkxtvpefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013479.9698205-235-171581688177689/AnsiballZ_stat.py
Dec 06 09:31:20 np0005548790.localdomain sudo[132843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:20 np0005548790.localdomain python3.9[132845]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:20 np0005548790.localdomain sudo[132843]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:20 np0005548790.localdomain sudo[132891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xiimyrsktgdnuumgcievgopbzviqczsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013479.9698205-235-171581688177689/AnsiballZ_file.py
Dec 06 09:31:20 np0005548790.localdomain sudo[132891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:20 np0005548790.localdomain python3.9[132893]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:20 np0005548790.localdomain sudo[132891]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57519 DF PROTO=TCP SPT=41730 DPT=9102 SEQ=2301784906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F172DF0000000001030307) 
Dec 06 09:31:21 np0005548790.localdomain sudo[132983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xxxdauxhyrufhqgxcmtxfbmqfhclanjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013481.1332738-274-166292593171454/AnsiballZ_ini_file.py
Dec 06 09:31:21 np0005548790.localdomain sudo[132983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:21 np0005548790.localdomain python3.9[132985]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:21 np0005548790.localdomain sudo[132983]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:22 np0005548790.localdomain sudo[133075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbjcnttxzmmtfxnzvqtmjapclbvjxrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013481.8480089-274-15718070034246/AnsiballZ_ini_file.py
Dec 06 09:31:22 np0005548790.localdomain sudo[133075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:22 np0005548790.localdomain python3.9[133077]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:22 np0005548790.localdomain sudo[133075]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:22 np0005548790.localdomain sudo[133167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fosjrkcgvhyeyufogxsjyotpbgdwcwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013482.4157515-274-65138256698812/AnsiballZ_ini_file.py
Dec 06 09:31:22 np0005548790.localdomain sudo[133167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:22 np0005548790.localdomain python3.9[133169]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:22 np0005548790.localdomain sudo[133167]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:23 np0005548790.localdomain sudo[133259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zycywhtqrxxttxankcaivoumrpjzeyzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013482.9929116-274-174243343421170/AnsiballZ_ini_file.py
Dec 06 09:31:23 np0005548790.localdomain sudo[133259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:23 np0005548790.localdomain python3.9[133261]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:31:23 np0005548790.localdomain sudo[133259]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:24 np0005548790.localdomain sudo[133351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etzizdelnfeyrlwhmkwrjrsacwxysaxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013483.7933338-367-175548502606356/AnsiballZ_dnf.py
Dec 06 09:31:24 np0005548790.localdomain sudo[133351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:24 np0005548790.localdomain python3.9[133353]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:31:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18261 DF PROTO=TCP SPT=59704 DPT=9105 SEQ=1543435166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F17F5F0000000001030307) 
Dec 06 09:31:27 np0005548790.localdomain sudo[133351]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40552 DF PROTO=TCP SPT=45644 DPT=9105 SEQ=689069432 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F18B1F0000000001030307) 
Dec 06 09:31:28 np0005548790.localdomain sudo[133445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tiztlldyhffftxxajopbfelvyfrpzkrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013487.9075646-400-30634430753012/AnsiballZ_setup.py
Dec 06 09:31:28 np0005548790.localdomain sudo[133445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:28 np0005548790.localdomain python3.9[133447]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:31:28 np0005548790.localdomain sudo[133445]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:28 np0005548790.localdomain sudo[133539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lumjidvvaywymzodhubxwfchohkogshp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013488.658602-425-108006777835422/AnsiballZ_stat.py
Dec 06 09:31:28 np0005548790.localdomain sudo[133539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:29 np0005548790.localdomain python3.9[133541]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:31:29 np0005548790.localdomain sudo[133539]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:29 np0005548790.localdomain sudo[133631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xzuvrtyznfbbfjlfnuxndjwmqlzwyqmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013489.3850513-451-202727033076629/AnsiballZ_stat.py
Dec 06 09:31:29 np0005548790.localdomain sudo[133631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:29 np0005548790.localdomain python3.9[133633]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:31:29 np0005548790.localdomain sudo[133631]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:30 np0005548790.localdomain sudo[133723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvtqaciyecxnjmghwzyugbhmjocjvtql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013490.19067-481-20345699630074/AnsiballZ_command.py
Dec 06 09:31:30 np0005548790.localdomain sudo[133723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:30 np0005548790.localdomain python3.9[133725]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:31:30 np0005548790.localdomain sudo[133723]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18263 DF PROTO=TCP SPT=59704 DPT=9105 SEQ=1543435166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1971F0000000001030307) 
Dec 06 09:31:31 np0005548790.localdomain sudo[133816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxuyughvumepfcwksadffglwmfvtkotw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013490.9832473-512-113152841152198/AnsiballZ_service_facts.py
Dec 06 09:31:31 np0005548790.localdomain sudo[133816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:31 np0005548790.localdomain python3.9[133818]: ansible-service_facts Invoked
Dec 06 09:31:31 np0005548790.localdomain network[133835]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:31:31 np0005548790.localdomain network[133836]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:31:31 np0005548790.localdomain network[133837]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:31:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:31:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57521 DF PROTO=TCP SPT=41730 DPT=9102 SEQ=2301784906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1A31F0000000001030307) 
Dec 06 09:31:35 np0005548790.localdomain sudo[133816]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:35 np0005548790.localdomain sudo[133962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:31:35 np0005548790.localdomain sudo[133962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:31:35 np0005548790.localdomain sudo[133962]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:35 np0005548790.localdomain sudo[133977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:31:35 np0005548790.localdomain sudo[133977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:31:36 np0005548790.localdomain sudo[133977]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:36 np0005548790.localdomain sudo[134023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:31:36 np0005548790.localdomain sudo[134023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:31:36 np0005548790.localdomain sudo[134023]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:36 np0005548790.localdomain sudo[134038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 09:31:36 np0005548790.localdomain sudo[134038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 2025-12-06 09:31:37.072040323 +0000 UTC m=+0.078547498 container create b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mirzakhani, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Dec 06 09:31:37 np0005548790.localdomain systemd[1]: Started libpod-conmon-b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a.scope.
Dec 06 09:31:37 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 2025-12-06 09:31:37.132147716 +0000 UTC m=+0.138654921 container init b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mirzakhani, name=rhceph, ceph=True, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.buildah.version=1.41.4)
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 2025-12-06 09:31:37.03835993 +0000 UTC m=+0.044867135 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 2025-12-06 09:31:37.141241219 +0000 UTC m=+0.147748424 container start b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mirzakhani, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64)
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 2025-12-06 09:31:37.141508256 +0000 UTC m=+0.148015431 container attach b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mirzakhani, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, release=1763362218, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., name=rhceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 09:31:37 np0005548790.localdomain cool_mirzakhani[134108]: 167 167
Dec 06 09:31:37 np0005548790.localdomain systemd[1]: libpod-b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a.scope: Deactivated successfully.
Dec 06 09:31:37 np0005548790.localdomain podman[134093]: 2025-12-06 09:31:37.145317308 +0000 UTC m=+0.151824483 container died b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mirzakhani, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, ceph=True, name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, vcs-type=git)
Dec 06 09:31:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1961 DF PROTO=TCP SPT=36372 DPT=9100 SEQ=1529989084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1B0600000000001030307) 
Dec 06 09:31:37 np0005548790.localdomain podman[134113]: 2025-12-06 09:31:37.263887399 +0000 UTC m=+0.106368964 container remove b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_mirzakhani, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True)
Dec 06 09:31:37 np0005548790.localdomain systemd[1]: libpod-conmon-b4003457ff98d98c1b707452256dbd76bc57daeafa3256510e7d0273b9c1e27a.scope: Deactivated successfully.
Dec 06 09:31:37 np0005548790.localdomain podman[134170]: 
Dec 06 09:31:37 np0005548790.localdomain podman[134170]: 2025-12-06 09:31:37.47051544 +0000 UTC m=+0.079343049 container create aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_bardeen, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 09:31:37 np0005548790.localdomain systemd[1]: Started libpod-conmon-aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579.scope.
Dec 06 09:31:37 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:31:37 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd670d2e2759e2545b7b883cfd21960004b72e1250e664e5f5c6a0c5fa1328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 09:31:37 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd670d2e2759e2545b7b883cfd21960004b72e1250e664e5f5c6a0c5fa1328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 09:31:37 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd670d2e2759e2545b7b883cfd21960004b72e1250e664e5f5c6a0c5fa1328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 09:31:37 np0005548790.localdomain podman[134170]: 2025-12-06 09:31:37.53688642 +0000 UTC m=+0.145714019 container init aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_bardeen, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 09:31:37 np0005548790.localdomain podman[134170]: 2025-12-06 09:31:37.439030486 +0000 UTC m=+0.047858125 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 09:31:37 np0005548790.localdomain podman[134170]: 2025-12-06 09:31:37.546418155 +0000 UTC m=+0.155245754 container start aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_bardeen, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 09:31:37 np0005548790.localdomain podman[134170]: 2025-12-06 09:31:37.546717334 +0000 UTC m=+0.155544973 container attach aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_bardeen, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 09:31:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-03adc0a338d69be72f132dc7c4cd99bad9874e9cee1bdf1fe049d0929ff4372e-merged.mount: Deactivated successfully.
Dec 06 09:31:38 np0005548790.localdomain sudo[134588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqggtgayqtqknkartarzpejveefwktjp ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765013497.3044002-557-34555377222126/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765013497.3044002-557-34555377222126/args
Dec 06 09:31:38 np0005548790.localdomain sudo[134588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:38 np0005548790.localdomain sudo[134588]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]: [
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:     {
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "available": false,
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "ceph_device": false,
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "lsm_data": {},
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "lvs": [],
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "path": "/dev/sr0",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "rejected_reasons": [
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "Insufficient space (<5GB)",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "Has a FileSystem"
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         ],
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         "sys_api": {
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "actuators": null,
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "device_nodes": "sr0",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "human_readable_size": "482.00 KB",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "id_bus": "ata",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "model": "QEMU DVD-ROM",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "nr_requests": "2",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "partitions": {},
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "path": "/dev/sr0",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "removable": "1",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "rev": "2.5+",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "ro": "0",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "rotational": "1",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "sas_address": "",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "sas_device_handle": "",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "scheduler_mode": "mq-deadline",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "sectors": 0,
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "sectorsize": "2048",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "size": 493568.0,
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "support_discard": "0",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "type": "disk",
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:             "vendor": "QEMU"
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:         }
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]:     }
Dec 06 09:31:38 np0005548790.localdomain strange_bardeen[134215]: ]
Dec 06 09:31:38 np0005548790.localdomain systemd[1]: libpod-aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579.scope: Deactivated successfully.
Dec 06 09:31:38 np0005548790.localdomain podman[134170]: 2025-12-06 09:31:38.356474032 +0000 UTC m=+0.965301631 container died aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_bardeen, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7)
Dec 06 09:31:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-27dd670d2e2759e2545b7b883cfd21960004b72e1250e664e5f5c6a0c5fa1328-merged.mount: Deactivated successfully.
Dec 06 09:31:38 np0005548790.localdomain podman[135537]: 2025-12-06 09:31:38.436414446 +0000 UTC m=+0.068500288 container remove aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_bardeen, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc.)
Dec 06 09:31:38 np0005548790.localdomain systemd[1]: libpod-conmon-aedd7caca2f546533046d3bbf14aa621d41cd3ef40464c30f0a6b7e076a71579.scope: Deactivated successfully.
Dec 06 09:31:38 np0005548790.localdomain sudo[134038]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:38 np0005548790.localdomain sudo[135626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqapotlaxtjxxgpjvvwwjdirftsrgbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013498.5501027-590-31254260349982/AnsiballZ_dnf.py
Dec 06 09:31:38 np0005548790.localdomain sudo[135626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:39 np0005548790.localdomain python3.9[135628]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:31:39 np0005548790.localdomain sudo[135630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:31:39 np0005548790.localdomain sudo[135630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:31:39 np0005548790.localdomain sudo[135630]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1386 DF PROTO=TCP SPT=42596 DPT=9100 SEQ=3777187908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1BD200000000001030307) 
Dec 06 09:31:42 np0005548790.localdomain sudo[135626]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1963 DF PROTO=TCP SPT=36372 DPT=9100 SEQ=1529989084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1C8200000000001030307) 
Dec 06 09:31:43 np0005548790.localdomain sudo[135735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfdcrzidxlmdntiyhrbhvyhhuafawfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013502.7462482-628-138222569393561/AnsiballZ_package_facts.py
Dec 06 09:31:43 np0005548790.localdomain sudo[135735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:43 np0005548790.localdomain python3.9[135737]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 06 09:31:43 np0005548790.localdomain sudo[135735]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:45 np0005548790.localdomain sudo[135827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvyermyrpterifckifzdufjtvmwvcnon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013504.71678-660-43809603504862/AnsiballZ_stat.py
Dec 06 09:31:45 np0005548790.localdomain sudo[135827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:45 np0005548790.localdomain python3.9[135829]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:45 np0005548790.localdomain sudo[135827]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:45 np0005548790.localdomain sudo[135902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-proygccxjtkriksvrofxdjqzsnsoxmtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013504.71678-660-43809603504862/AnsiballZ_copy.py
Dec 06 09:31:45 np0005548790.localdomain sudo[135902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:45 np0005548790.localdomain python3.9[135904]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013504.71678-660-43809603504862/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:45 np0005548790.localdomain sudo[135902]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:46 np0005548790.localdomain sudo[135996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urqqxdtqjoigqqepubdbseeiqcswhffk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013506.1626282-705-110744756413021/AnsiballZ_stat.py
Dec 06 09:31:46 np0005548790.localdomain sudo[135996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:46 np0005548790.localdomain python3.9[135998]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:31:46 np0005548790.localdomain sudo[135996]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:46 np0005548790.localdomain sudo[136071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pntrnbesbzxqmltdeimofdnibxmdyrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013506.1626282-705-110744756413021/AnsiballZ_copy.py
Dec 06 09:31:46 np0005548790.localdomain sudo[136071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:47 np0005548790.localdomain python3.9[136073]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013506.1626282-705-110744756413021/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:47 np0005548790.localdomain sudo[136071]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30379 DF PROTO=TCP SPT=33798 DPT=9102 SEQ=3162106093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1DC110000000001030307) 
Dec 06 09:31:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47742 DF PROTO=TCP SPT=40000 DPT=9882 SEQ=1483728647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1DC170000000001030307) 
Dec 06 09:31:48 np0005548790.localdomain sudo[136165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udiuvdfmqyvxkcaucxtimmqmttdhhgvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013508.1762097-767-181042034880948/AnsiballZ_lineinfile.py
Dec 06 09:31:48 np0005548790.localdomain sudo[136165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:48 np0005548790.localdomain python3.9[136167]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:31:48 np0005548790.localdomain sudo[136165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:50 np0005548790.localdomain sudo[136259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltwimcnoavlwnkquuwlpncwvsovnozcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013509.9088206-813-45343298230838/AnsiballZ_setup.py
Dec 06 09:31:50 np0005548790.localdomain sudo[136259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:50 np0005548790.localdomain python3.9[136261]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:31:50 np0005548790.localdomain sudo[136259]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47744 DF PROTO=TCP SPT=40000 DPT=9882 SEQ=1483728647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1E81F0000000001030307) 
Dec 06 09:31:51 np0005548790.localdomain sudo[136313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmyyplibsxbdakmqjixcbtbvmkaojiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013509.9088206-813-45343298230838/AnsiballZ_systemd.py
Dec 06 09:31:51 np0005548790.localdomain sudo[136313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:52 np0005548790.localdomain python3.9[136315]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:31:52 np0005548790.localdomain sudo[136313]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:54 np0005548790.localdomain sudo[136407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgqxfdxyuwqkdguiuyvjycmtgtgjdbsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013514.045559-860-34907364862444/AnsiballZ_setup.py
Dec 06 09:31:54 np0005548790.localdomain sudo[136407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19529 DF PROTO=TCP SPT=49202 DPT=9105 SEQ=3573675818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F1F45F0000000001030307) 
Dec 06 09:31:54 np0005548790.localdomain python3.9[136409]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:31:54 np0005548790.localdomain sudo[136407]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:55 np0005548790.localdomain sudo[136461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qppaelvwusaduujccxmcacqiivaryfma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013514.045559-860-34907364862444/AnsiballZ_systemd.py
Dec 06 09:31:55 np0005548790.localdomain sudo[136461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:31:55 np0005548790.localdomain python3.9[136463]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:31:55 np0005548790.localdomain chronyd[25781]: chronyd exiting
Dec 06 09:31:55 np0005548790.localdomain systemd[1]: Stopping NTP client/server...
Dec 06 09:31:55 np0005548790.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 06 09:31:55 np0005548790.localdomain systemd[1]: Stopped NTP client/server.
Dec 06 09:31:55 np0005548790.localdomain systemd[1]: Starting NTP client/server...
Dec 06 09:31:55 np0005548790.localdomain chronyd[136471]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 06 09:31:55 np0005548790.localdomain chronyd[136471]: Frequency -26.460 +/- 0.114 ppm read from /var/lib/chrony/drift
Dec 06 09:31:55 np0005548790.localdomain chronyd[136471]: Loaded seccomp filter (level 2)
Dec 06 09:31:55 np0005548790.localdomain systemd[1]: Started NTP client/server.
Dec 06 09:31:55 np0005548790.localdomain sudo[136461]: pam_unix(sudo:session): session closed for user root
Dec 06 09:31:56 np0005548790.localdomain sshd[131932]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:31:56 np0005548790.localdomain systemd[1]: session-42.scope: Deactivated successfully.
Dec 06 09:31:56 np0005548790.localdomain systemd[1]: session-42.scope: Consumed 27.796s CPU time.
Dec 06 09:31:56 np0005548790.localdomain systemd-logind[760]: Session 42 logged out. Waiting for processes to exit.
Dec 06 09:31:56 np0005548790.localdomain systemd-logind[760]: Removed session 42.
Dec 06 09:31:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41025 DF PROTO=TCP SPT=59626 DPT=9105 SEQ=3048302590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2011F0000000001030307) 
Dec 06 09:32:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19531 DF PROTO=TCP SPT=49202 DPT=9105 SEQ=3573675818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F20C1F0000000001030307) 
Dec 06 09:32:01 np0005548790.localdomain sshd[136487]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:32:02 np0005548790.localdomain sshd[136487]: Accepted publickey for zuul from 192.168.122.30 port 44006 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:32:02 np0005548790.localdomain systemd-logind[760]: New session 43 of user zuul.
Dec 06 09:32:02 np0005548790.localdomain systemd[1]: Started Session 43 of User zuul.
Dec 06 09:32:02 np0005548790.localdomain sshd[136487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:32:03 np0005548790.localdomain python3.9[136580]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:32:04 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47746 DF PROTO=TCP SPT=40000 DPT=9882 SEQ=1483728647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2191F0000000001030307) 
Dec 06 09:32:04 np0005548790.localdomain sudo[136674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfwjccmzvdfpnmapfdsphzumwyvhlumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013523.9032393-61-12783455221107/AnsiballZ_file.py
Dec 06 09:32:04 np0005548790.localdomain sudo[136674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:04 np0005548790.localdomain python3.9[136676]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:04 np0005548790.localdomain sudo[136674]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:05 np0005548790.localdomain sudo[136779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzozhfgdkqowvranompdfbxqvicbeiyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013524.6795776-85-137997703921495/AnsiballZ_stat.py
Dec 06 09:32:05 np0005548790.localdomain sudo[136779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:05 np0005548790.localdomain python3.9[136781]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:05 np0005548790.localdomain sudo[136779]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:05 np0005548790.localdomain sudo[136827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lrqcnvmfzmwwmtvvhhddkkrigkjjzhds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013524.6795776-85-137997703921495/AnsiballZ_file.py
Dec 06 09:32:05 np0005548790.localdomain sudo[136827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:05 np0005548790.localdomain python3.9[136829]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.pe1r9t8t recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:05 np0005548790.localdomain sudo[136827]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:06 np0005548790.localdomain sudo[136919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eccihbclrxeriocaknwroszhzhlgawyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013526.2485356-145-85074583257667/AnsiballZ_stat.py
Dec 06 09:32:06 np0005548790.localdomain sudo[136919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16119 DF PROTO=TCP SPT=35180 DPT=9101 SEQ=2929214080 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F223200000000001030307) 
Dec 06 09:32:06 np0005548790.localdomain python3.9[136921]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:06 np0005548790.localdomain sudo[136919]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:07 np0005548790.localdomain sudo[136994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grrmvskvogjhbkrpjwxdgcavmisqdzsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013526.2485356-145-85074583257667/AnsiballZ_copy.py
Dec 06 09:32:07 np0005548790.localdomain sudo[136994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:07 np0005548790.localdomain python3.9[136996]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013526.2485356-145-85074583257667/.source _original_basename=.izhz123x follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:07 np0005548790.localdomain sudo[136994]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:07 np0005548790.localdomain sudo[137086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhdibgfuuokutlkuklrppvwlqikpuhuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013527.6384068-193-59695574732985/AnsiballZ_file.py
Dec 06 09:32:07 np0005548790.localdomain sudo[137086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:08 np0005548790.localdomain python3.9[137088]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:32:08 np0005548790.localdomain sudo[137086]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:08 np0005548790.localdomain sudo[137178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbgzuzlblehvepfxfdlhyhgcvxvbmtar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013528.253053-218-257917049970661/AnsiballZ_stat.py
Dec 06 09:32:08 np0005548790.localdomain sudo[137178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:08 np0005548790.localdomain python3.9[137180]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:08 np0005548790.localdomain sudo[137178]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:08 np0005548790.localdomain sudo[137251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzijhhcwrhdamiwwmrvjvtlcishuynhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013528.253053-218-257917049970661/AnsiballZ_copy.py
Dec 06 09:32:08 np0005548790.localdomain sudo[137251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:09 np0005548790.localdomain python3.9[137253]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013528.253053-218-257917049970661/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:32:09 np0005548790.localdomain sudo[137251]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:09 np0005548790.localdomain sudo[137343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aejrkfqyhjjrcuxejqvgsxbzmrsjgldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013529.322861-218-13046148350345/AnsiballZ_stat.py
Dec 06 09:32:09 np0005548790.localdomain sudo[137343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:09 np0005548790.localdomain python3.9[137345]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:09 np0005548790.localdomain sudo[137343]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:10 np0005548790.localdomain sudo[137416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgioxftnvibiahhxneyaoeyypfmxhqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013529.322861-218-13046148350345/AnsiballZ_copy.py
Dec 06 09:32:10 np0005548790.localdomain sudo[137416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38629 DF PROTO=TCP SPT=46404 DPT=9100 SEQ=2347100962 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2311F0000000001030307) 
Dec 06 09:32:10 np0005548790.localdomain python3.9[137418]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013529.322861-218-13046148350345/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:32:10 np0005548790.localdomain sudo[137416]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:10 np0005548790.localdomain sudo[137508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vyiuoehwbpdvtamvhzrkrthhjmehwdyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013530.4818292-305-119875280874448/AnsiballZ_file.py
Dec 06 09:32:10 np0005548790.localdomain sudo[137508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:10 np0005548790.localdomain python3.9[137510]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:10 np0005548790.localdomain sudo[137508]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:11 np0005548790.localdomain sudo[137600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktrtwnzbolefshpdvqfmbcvuxzrhgncn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013531.1190865-328-60111812087191/AnsiballZ_stat.py
Dec 06 09:32:11 np0005548790.localdomain sudo[137600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:12 np0005548790.localdomain python3.9[137602]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:12 np0005548790.localdomain sudo[137600]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:12 np0005548790.localdomain sudo[137673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynkeyjrilnyigsdtwthekqszdqiqlbis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013531.1190865-328-60111812087191/AnsiballZ_copy.py
Dec 06 09:32:12 np0005548790.localdomain sudo[137673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:12 np0005548790.localdomain python3.9[137675]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013531.1190865-328-60111812087191/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:12 np0005548790.localdomain sudo[137673]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:13 np0005548790.localdomain sudo[137765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hngwhebxhnayjfhsmqnsbusggeaqwevv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013532.8267677-373-93989897985616/AnsiballZ_stat.py
Dec 06 09:32:13 np0005548790.localdomain sudo[137765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:13 np0005548790.localdomain python3.9[137767]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:13 np0005548790.localdomain sudo[137765]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43640 DF PROTO=TCP SPT=46692 DPT=9100 SEQ=472935572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F23D5F0000000001030307) 
Dec 06 09:32:14 np0005548790.localdomain sudo[137838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtcqsdlqdxwouoqtpymabesdwskvukwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013532.8267677-373-93989897985616/AnsiballZ_copy.py
Dec 06 09:32:14 np0005548790.localdomain sudo[137838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:14 np0005548790.localdomain python3.9[137840]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013532.8267677-373-93989897985616/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:14 np0005548790.localdomain sudo[137838]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:15 np0005548790.localdomain sudo[137930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vsvlunvcvzwvuxofmarqzqenqxhrtvmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013534.5896542-419-30934172444028/AnsiballZ_systemd.py
Dec 06 09:32:15 np0005548790.localdomain sudo[137930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:15 np0005548790.localdomain python3.9[137932]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:32:15 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:32:15 np0005548790.localdomain systemd-rc-local-generator[137962]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:15 np0005548790.localdomain systemd-sysv-generator[137965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:32:15 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:32:15 np0005548790.localdomain systemd-rc-local-generator[137993]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:15 np0005548790.localdomain systemd-sysv-generator[137996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:32:16 np0005548790.localdomain systemd[1]: Starting dnf makecache...
Dec 06 09:32:16 np0005548790.localdomain systemd[1]: Starting EDPM Container Shutdown...
Dec 06 09:32:16 np0005548790.localdomain systemd[1]: Finished EDPM Container Shutdown.
Dec 06 09:32:16 np0005548790.localdomain sudo[137930]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:16 np0005548790.localdomain dnf[138007]: Updating Subscription Management repositories.
Dec 06 09:32:16 np0005548790.localdomain sudo[138099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wkscaktcbcuxixzhbomiuubtmcvlnsev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013536.3223588-442-279821921862626/AnsiballZ_stat.py
Dec 06 09:32:16 np0005548790.localdomain sudo[138099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:16 np0005548790.localdomain python3.9[138101]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:16 np0005548790.localdomain sudo[138099]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:17 np0005548790.localdomain sudo[138172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbrppywsioshgravcofkienhyushpppe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013536.3223588-442-279821921862626/AnsiballZ_copy.py
Dec 06 09:32:17 np0005548790.localdomain sudo[138172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:17 np0005548790.localdomain python3.9[138174]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013536.3223588-442-279821921862626/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:17 np0005548790.localdomain sudo[138172]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:17 np0005548790.localdomain sudo[138264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgfflmpcmdtoxoufnwnbleafvdqtslha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013537.5454092-487-172922603956290/AnsiballZ_stat.py
Dec 06 09:32:17 np0005548790.localdomain sudo[138264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:17 np0005548790.localdomain dnf[138007]: Metadata cache refreshed recently.
Dec 06 09:32:18 np0005548790.localdomain python3.9[138266]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:18 np0005548790.localdomain sudo[138264]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:18 np0005548790.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 06 09:32:18 np0005548790.localdomain systemd[1]: Finished dnf makecache.
Dec 06 09:32:18 np0005548790.localdomain systemd[1]: dnf-makecache.service: Consumed 1.994s CPU time.
Dec 06 09:32:18 np0005548790.localdomain sudo[138337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlzciwzwxcjmsnmmybbtmsliytkaflzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013537.5454092-487-172922603956290/AnsiballZ_copy.py
Dec 06 09:32:18 np0005548790.localdomain sudo[138337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14549 DF PROTO=TCP SPT=33746 DPT=9102 SEQ=2679971517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F251400000000001030307) 
Dec 06 09:32:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14376 DF PROTO=TCP SPT=57392 DPT=9882 SEQ=3594390894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F251470000000001030307) 
Dec 06 09:32:18 np0005548790.localdomain python3.9[138339]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013537.5454092-487-172922603956290/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:18 np0005548790.localdomain sudo[138337]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:18 np0005548790.localdomain sudo[138429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gcivudvtthbwzooghkqvxytkvabxybfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013538.7160487-533-225165916416824/AnsiballZ_systemd.py
Dec 06 09:32:18 np0005548790.localdomain sudo[138429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:19 np0005548790.localdomain python3.9[138431]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:32:19 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:32:19 np0005548790.localdomain systemd-rc-local-generator[138455]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:32:19 np0005548790.localdomain systemd-sysv-generator[138459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:32:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:32:19 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:32:19 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:32:19 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:32:19 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:32:19 np0005548790.localdomain sudo[138429]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:20 np0005548790.localdomain python3.9[138563]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:32:20 np0005548790.localdomain network[138580]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:32:20 np0005548790.localdomain network[138581]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:32:20 np0005548790.localdomain network[138582]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:32:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43641 DF PROTO=TCP SPT=46692 DPT=9100 SEQ=472935572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F25D1F0000000001030307) 
Dec 06 09:32:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:32:24 np0005548790.localdomain sudo[138782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpxohlnmekzofzlhizhqirlxvckljfpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013544.3431766-611-80740770876021/AnsiballZ_stat.py
Dec 06 09:32:24 np0005548790.localdomain sudo[138782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15249 DF PROTO=TCP SPT=43980 DPT=9105 SEQ=3033870475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2699F0000000001030307) 
Dec 06 09:32:24 np0005548790.localdomain python3.9[138784]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:24 np0005548790.localdomain sudo[138782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:25 np0005548790.localdomain sudo[138857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdikybhfuftyjvhyfminkqwrpaokotrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013544.3431766-611-80740770876021/AnsiballZ_copy.py
Dec 06 09:32:25 np0005548790.localdomain sudo[138857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:25 np0005548790.localdomain python3.9[138859]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013544.3431766-611-80740770876021/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:26 np0005548790.localdomain sudo[138857]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:26 np0005548790.localdomain sudo[138950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afewdekvermdxkbhhxegkoklwrdciddy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013546.290451-655-148701609023240/AnsiballZ_systemd.py
Dec 06 09:32:26 np0005548790.localdomain sudo[138950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:26 np0005548790.localdomain python3.9[138952]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:32:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18266 DF PROTO=TCP SPT=59704 DPT=9105 SEQ=1543435166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2751F0000000001030307) 
Dec 06 09:32:27 np0005548790.localdomain systemd[1]: Reloading OpenSSH server daemon...
Dec 06 09:32:27 np0005548790.localdomain systemd[1]: Reloaded OpenSSH server daemon.
Dec 06 09:32:27 np0005548790.localdomain sshd[119211]: Received SIGHUP; restarting.
Dec 06 09:32:27 np0005548790.localdomain sshd[119211]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:32:27 np0005548790.localdomain sshd[119211]: Server listening on 0.0.0.0 port 22.
Dec 06 09:32:27 np0005548790.localdomain sshd[119211]: Server listening on :: port 22.
Dec 06 09:32:27 np0005548790.localdomain sudo[138950]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:28 np0005548790.localdomain sudo[139046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asjozaigjorvzbvtxptgzpvioyiopobj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013548.1266975-680-280320310771817/AnsiballZ_file.py
Dec 06 09:32:28 np0005548790.localdomain sudo[139046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:28 np0005548790.localdomain python3.9[139048]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:28 np0005548790.localdomain sudo[139046]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:29 np0005548790.localdomain sudo[139138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyyryzzvnwfissgkapffjdtwbxxnlyhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013548.8077335-703-203314714323807/AnsiballZ_stat.py
Dec 06 09:32:29 np0005548790.localdomain sudo[139138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:29 np0005548790.localdomain python3.9[139140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:29 np0005548790.localdomain sudo[139138]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:29 np0005548790.localdomain sudo[139211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlojayiwnycknsygcvmosjghwafveswr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013548.8077335-703-203314714323807/AnsiballZ_copy.py
Dec 06 09:32:29 np0005548790.localdomain sudo[139211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:29 np0005548790.localdomain python3.9[139213]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013548.8077335-703-203314714323807/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:29 np0005548790.localdomain sudo[139211]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15251 DF PROTO=TCP SPT=43980 DPT=9105 SEQ=3033870475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2815F0000000001030307) 
Dec 06 09:32:30 np0005548790.localdomain sudo[139303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlgdcqrycxgociaplqusrggypopovrzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013550.3362463-758-225627302442768/AnsiballZ_timezone.py
Dec 06 09:32:30 np0005548790.localdomain sudo[139303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:31 np0005548790.localdomain python3.9[139305]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 06 09:32:31 np0005548790.localdomain systemd[1]: Starting Time & Date Service...
Dec 06 09:32:31 np0005548790.localdomain systemd[1]: Started Time & Date Service.
Dec 06 09:32:31 np0005548790.localdomain sudo[139303]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:31 np0005548790.localdomain sudo[139399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-opicavrzvugadgmgamxrliltmubyhovy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013551.6378143-785-260061305080803/AnsiballZ_file.py
Dec 06 09:32:31 np0005548790.localdomain sudo[139399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:32 np0005548790.localdomain python3.9[139401]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:32 np0005548790.localdomain sudo[139399]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:32 np0005548790.localdomain sudo[139491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmfvxzivevwblflmrnqzijcqaaououav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013552.3879018-809-62730563188530/AnsiballZ_stat.py
Dec 06 09:32:32 np0005548790.localdomain sudo[139491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:32 np0005548790.localdomain python3.9[139493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:32 np0005548790.localdomain sudo[139491]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:33 np0005548790.localdomain sudo[139564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqcpxeyyfakirqcjrcvumlyspmeilnkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013552.3879018-809-62730563188530/AnsiballZ_copy.py
Dec 06 09:32:33 np0005548790.localdomain sudo[139564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:33 np0005548790.localdomain python3.9[139566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013552.3879018-809-62730563188530/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:33 np0005548790.localdomain sudo[139564]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14553 DF PROTO=TCP SPT=33746 DPT=9102 SEQ=2679971517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F28D1F0000000001030307) 
Dec 06 09:32:33 np0005548790.localdomain sudo[139656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpihteqcwiixeunmlecioswfiyucbiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013553.557061-854-170349542629684/AnsiballZ_stat.py
Dec 06 09:32:33 np0005548790.localdomain sudo[139656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:33 np0005548790.localdomain python3.9[139658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:34 np0005548790.localdomain sudo[139656]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:34 np0005548790.localdomain sudo[139729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyaeskjlswaijresynednmmdqhugmoul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013553.557061-854-170349542629684/AnsiballZ_copy.py
Dec 06 09:32:34 np0005548790.localdomain sudo[139729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:34 np0005548790.localdomain python3.9[139731]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013553.557061-854-170349542629684/.source.yaml _original_basename=.xts1voiz follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:35 np0005548790.localdomain sudo[139729]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:35 np0005548790.localdomain sudo[139821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtgkkttfuhfjsjhlfxndbworbwascrvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013555.272613-900-241315985510399/AnsiballZ_stat.py
Dec 06 09:32:35 np0005548790.localdomain sudo[139821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:35 np0005548790.localdomain python3.9[139823]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:35 np0005548790.localdomain sudo[139821]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:36 np0005548790.localdomain sudo[139896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvivwpusbhzsnfnvxivecsoxtvyfwxll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013555.272613-900-241315985510399/AnsiballZ_copy.py
Dec 06 09:32:36 np0005548790.localdomain sudo[139896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:36 np0005548790.localdomain python3.9[139898]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013555.272613-900-241315985510399/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:36 np0005548790.localdomain sudo[139896]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18096 DF PROTO=TCP SPT=43976 DPT=9101 SEQ=2676839542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2991F0000000001030307) 
Dec 06 09:32:37 np0005548790.localdomain sudo[139988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oagnwzsvhqjqzberimcouukiztfybaqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013556.933118-944-192125606661191/AnsiballZ_command.py
Dec 06 09:32:37 np0005548790.localdomain sudo[139988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:37 np0005548790.localdomain python3.9[139990]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:32:37 np0005548790.localdomain sudo[139988]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:38 np0005548790.localdomain sudo[140081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fpwdnssgebwnqsjtwbgisgpshzuxhumz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013557.8240044-969-120375562559980/AnsiballZ_command.py
Dec 06 09:32:38 np0005548790.localdomain sudo[140081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:38 np0005548790.localdomain python3.9[140083]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:32:38 np0005548790.localdomain sudo[140081]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:38 np0005548790.localdomain sudo[140174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blzwhavuqnjptdvfjmvlyadhsavftpjl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013558.5042467-991-81605018196887/AnsiballZ_edpm_nftables_from_files.py
Dec 06 09:32:38 np0005548790.localdomain sudo[140174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:39 np0005548790.localdomain python3[140176]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:32:39 np0005548790.localdomain sudo[140174]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:39 np0005548790.localdomain sudo[140202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:32:39 np0005548790.localdomain sudo[140202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:32:39 np0005548790.localdomain sudo[140202]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:39 np0005548790.localdomain sudo[140238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 09:32:39 np0005548790.localdomain sudo[140238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:32:39 np0005548790.localdomain sudo[140296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhyxredmwagxfpoynyxfvcdvihhtgkhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013559.346264-1015-173813186210159/AnsiballZ_stat.py
Dec 06 09:32:39 np0005548790.localdomain sudo[140296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:39 np0005548790.localdomain python3.9[140298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:39 np0005548790.localdomain sudo[140296]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:39 np0005548790.localdomain sudo[140238]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:39 np0005548790.localdomain sudo[140347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:32:39 np0005548790.localdomain sudo[140347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:32:39 np0005548790.localdomain sudo[140347]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:40 np0005548790.localdomain sudo[140375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:32:40 np0005548790.localdomain sudo[140375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:32:40 np0005548790.localdomain sudo[140420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqlrqafyddrqizoqtbnqyvcbmqpbhdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013559.346264-1015-173813186210159/AnsiballZ_copy.py
Dec 06 09:32:40 np0005548790.localdomain sudo[140420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1966 DF PROTO=TCP SPT=36372 DPT=9100 SEQ=1529989084 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2A71F0000000001030307) 
Dec 06 09:32:40 np0005548790.localdomain python3.9[140422]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013559.346264-1015-173813186210159/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:40 np0005548790.localdomain sudo[140420]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:40 np0005548790.localdomain sudo[140375]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:40 np0005548790.localdomain sudo[140544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrsdulkalrslntgjcgmbmecgeljdyjwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013560.629524-1061-100718576383942/AnsiballZ_stat.py
Dec 06 09:32:40 np0005548790.localdomain sudo[140544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:41 np0005548790.localdomain python3.9[140546]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:41 np0005548790.localdomain sudo[140544]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:41 np0005548790.localdomain sudo[140584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:32:41 np0005548790.localdomain sudo[140584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:32:41 np0005548790.localdomain sudo[140584]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:41 np0005548790.localdomain sudo[140632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgcvgxspzpwqpoyzvnaormkvxdkphgtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013560.629524-1061-100718576383942/AnsiballZ_copy.py
Dec 06 09:32:41 np0005548790.localdomain sudo[140632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:41 np0005548790.localdomain python3.9[140634]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013560.629524-1061-100718576383942/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:41 np0005548790.localdomain sudo[140632]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:42 np0005548790.localdomain sudo[140724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdxnylhxywxdyrpiyjwkuzbhprorhtah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013561.9727302-1107-243471067586926/AnsiballZ_stat.py
Dec 06 09:32:42 np0005548790.localdomain sudo[140724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:42 np0005548790.localdomain python3.9[140726]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:42 np0005548790.localdomain sudo[140724]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:42 np0005548790.localdomain sudo[140797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-buijjpsjnmopxtybwayhitucrjlqlhqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013561.9727302-1107-243471067586926/AnsiballZ_copy.py
Dec 06 09:32:42 np0005548790.localdomain sudo[140797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:42 np0005548790.localdomain python3.9[140799]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013561.9727302-1107-243471067586926/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:42 np0005548790.localdomain sudo[140797]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:43 np0005548790.localdomain sudo[140889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjnknkwaqaothnfsmgobclguxpvrxely ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013563.1746428-1151-177616565719829/AnsiballZ_stat.py
Dec 06 09:32:43 np0005548790.localdomain sudo[140889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:43 np0005548790.localdomain python3.9[140891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:43 np0005548790.localdomain sudo[140889]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:43 np0005548790.localdomain sudo[140962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhnylphusawojghxslybdsondvkkxxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013563.1746428-1151-177616565719829/AnsiballZ_copy.py
Dec 06 09:32:43 np0005548790.localdomain sudo[140962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:44 np0005548790.localdomain python3.9[140964]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013563.1746428-1151-177616565719829/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:44 np0005548790.localdomain sudo[140962]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:45 np0005548790.localdomain sudo[141054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pmatiohenuvhlyanqvwafleoqyphqfmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013564.698109-1196-4750018995921/AnsiballZ_stat.py
Dec 06 09:32:45 np0005548790.localdomain sudo[141054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:45 np0005548790.localdomain python3.9[141056]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:32:45 np0005548790.localdomain sudo[141054]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:45 np0005548790.localdomain sudo[141127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svxnivrlzlvvdujzlpidjptjdidnycbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013564.698109-1196-4750018995921/AnsiballZ_copy.py
Dec 06 09:32:45 np0005548790.localdomain sudo[141127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:45 np0005548790.localdomain python3.9[141129]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013564.698109-1196-4750018995921/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:45 np0005548790.localdomain sudo[141127]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:46 np0005548790.localdomain sudo[141219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqqxvrtglpsdjsxbmbbguouqzbjufxtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013566.02661-1241-97700486387749/AnsiballZ_file.py
Dec 06 09:32:46 np0005548790.localdomain sudo[141219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:47 np0005548790.localdomain python3.9[141221]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:47 np0005548790.localdomain sudo[141219]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:47 np0005548790.localdomain sudo[141311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpmsixwqmtscstvwjqnojvyitqumvbst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013567.3533366-1265-261351346544019/AnsiballZ_command.py
Dec 06 09:32:47 np0005548790.localdomain sudo[141311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:47 np0005548790.localdomain python3.9[141313]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:32:47 np0005548790.localdomain sudo[141311]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62626 DF PROTO=TCP SPT=44850 DPT=9102 SEQ=4170923082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2C6710000000001030307) 
Dec 06 09:32:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56594 DF PROTO=TCP SPT=48916 DPT=9882 SEQ=100112506 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2C6770000000001030307) 
Dec 06 09:32:48 np0005548790.localdomain sudo[141406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tflqwwvoxppzhbgrgmnosmppydlnvjds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013568.0082817-1288-133493124193891/AnsiballZ_blockinfile.py
Dec 06 09:32:48 np0005548790.localdomain sudo[141406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:48 np0005548790.localdomain python3.9[141408]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:48 np0005548790.localdomain sudo[141406]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:49 np0005548790.localdomain sudo[141499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scoejmcjamcpbamhggrtffblmbgbkyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013568.9327562-1316-5580642857224/AnsiballZ_file.py
Dec 06 09:32:49 np0005548790.localdomain sudo[141499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:49 np0005548790.localdomain python3.9[141501]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:49 np0005548790.localdomain sudo[141499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:49 np0005548790.localdomain sudo[141591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofcclkoxeywnwabwbgmpnokafednbseq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013569.5125015-1316-260365176185329/AnsiballZ_file.py
Dec 06 09:32:49 np0005548790.localdomain sudo[141591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:49 np0005548790.localdomain python3.9[141593]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:32:49 np0005548790.localdomain sudo[141591]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14381 DF PROTO=TCP SPT=57392 DPT=9882 SEQ=3594390894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2CD360000000001030307) 
Dec 06 09:32:50 np0005548790.localdomain sudo[141683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdlxuseypntwfczvxuieanpcihwoqbod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013570.2672598-1361-26075899033115/AnsiballZ_mount.py
Dec 06 09:32:50 np0005548790.localdomain sudo[141683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:50 np0005548790.localdomain python3.9[141685]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 09:32:50 np0005548790.localdomain sudo[141683]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:51 np0005548790.localdomain sudo[141776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnrehasgaqkfsmtfkmhxddzxbhafurhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013571.1233299-1361-195060849778542/AnsiballZ_mount.py
Dec 06 09:32:51 np0005548790.localdomain sudo[141776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:51 np0005548790.localdomain python3.9[141778]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 06 09:32:51 np0005548790.localdomain sudo[141776]: pam_unix(sudo:session): session closed for user root
Dec 06 09:32:52 np0005548790.localdomain sshd[136487]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:32:52 np0005548790.localdomain systemd-logind[760]: Session 43 logged out. Waiting for processes to exit.
Dec 06 09:32:52 np0005548790.localdomain systemd[1]: session-43.scope: Deactivated successfully.
Dec 06 09:32:52 np0005548790.localdomain systemd[1]: session-43.scope: Consumed 27.798s CPU time.
Dec 06 09:32:52 np0005548790.localdomain systemd-logind[760]: Removed session 43.
Dec 06 09:32:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47748 DF PROTO=TCP SPT=40000 DPT=9882 SEQ=1483728647 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2D71F0000000001030307) 
Dec 06 09:32:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19534 DF PROTO=TCP SPT=49202 DPT=9105 SEQ=3573675818 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F2EB200000000001030307) 
Dec 06 09:32:58 np0005548790.localdomain sshd[141794]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:32:58 np0005548790.localdomain sshd[141794]: Accepted publickey for zuul from 192.168.122.30 port 33860 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:32:58 np0005548790.localdomain systemd-logind[760]: New session 44 of user zuul.
Dec 06 09:32:58 np0005548790.localdomain systemd[1]: Started Session 44 of User zuul.
Dec 06 09:32:58 np0005548790.localdomain sshd[141794]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:32:59 np0005548790.localdomain sudo[141887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upypkzcnnejnntdjlbpokxulatmajrpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013578.6898782-23-35676653212016/AnsiballZ_tempfile.py
Dec 06 09:32:59 np0005548790.localdomain sudo[141887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:32:59 np0005548790.localdomain python3.9[141889]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 06 09:32:59 np0005548790.localdomain sudo[141887]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:00 np0005548790.localdomain sudo[141979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdbipeecpvolmsfxuuwhltjztjnsvqgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013580.3228576-95-273310996865306/AnsiballZ_stat.py
Dec 06 09:33:00 np0005548790.localdomain sudo[141979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:00 np0005548790.localdomain python3.9[141981]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:33:00 np0005548790.localdomain sudo[141979]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:01 np0005548790.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 06 09:33:02 np0005548790.localdomain sudo[142075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bocmpgbtmgjlxhcnsdxvdnosemyjgyzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013581.6361597-143-173989704992634/AnsiballZ_slurp.py
Dec 06 09:33:02 np0005548790.localdomain sudo[142075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:02 np0005548790.localdomain python3.9[142077]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 06 09:33:02 np0005548790.localdomain sudo[142075]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:03 np0005548790.localdomain sudo[142167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkmehxgjlrxxioodoujpwviqkmzifozt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013583.2247036-191-16311724353925/AnsiballZ_stat.py
Dec 06 09:33:03 np0005548790.localdomain sudo[142167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:03 np0005548790.localdomain python3.9[142169]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.w53fk3lq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:03 np0005548790.localdomain sudo[142167]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:04 np0005548790.localdomain sudo[142242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afrteeiwlkpwvedxueqorygdezgsyvoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013583.2247036-191-16311724353925/AnsiballZ_copy.py
Dec 06 09:33:04 np0005548790.localdomain sudo[142242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:04 np0005548790.localdomain python3.9[142244]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.w53fk3lq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013583.2247036-191-16311724353925/.source.w53fk3lq _original_basename=.fy78mmkc follow=False checksum=3e842c629948eb11ff005810a7264dbaf8a6d16e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:04 np0005548790.localdomain sudo[142242]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:04 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17629 DF PROTO=TCP SPT=52302 DPT=9101 SEQ=88265184 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F306680000000001030307) 
Dec 06 09:33:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21513 DF PROTO=TCP SPT=44086 DPT=9100 SEQ=439736648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F30BE70000000001030307) 
Dec 06 09:33:06 np0005548790.localdomain sudo[142334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhrsnbhglryqwdospzinaeudatqnnyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013586.1051974-281-15656407304426/AnsiballZ_setup.py
Dec 06 09:33:06 np0005548790.localdomain sudo[142334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:06 np0005548790.localdomain python3.9[142336]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:06 np0005548790.localdomain sudo[142334]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:09 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18097 DF PROTO=TCP SPT=43976 DPT=9101 SEQ=2676839542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3171F0000000001030307) 
Dec 06 09:33:09 np0005548790.localdomain sudo[142426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oeudxwsohpaatwvyzvbgwhkjruhfdxps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013589.3767545-330-122633449897890/AnsiballZ_blockinfile.py
Dec 06 09:33:09 np0005548790.localdomain sudo[142426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:10 np0005548790.localdomain python3.9[142428]: ansible-ansible.builtin.blockinfile Invoked with block=np0005548785.localdomain,192.168.122.103,np0005548785* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC89JzJHuRLDUgmU66VPdPVwYLrvslBwa5i2QfiUzrnpt1lKz8ayq6QMRy5y5GgfjQQhX/YZiAjUSoogVsYDkoDaImXdtfQHFlFMLTlJPiYcA/cGAwMAE/vifpWoztBHUXkJ5YWUojkXzGoR8d7ESx/tTLG/9QrQDsW6JcV18mcFCQZdeWYWGWdLn6ynmQOZ0N4U6mYK1FqE+GKgP6L9PEjkC1ePo81AnYcdQ5Z1IETdcCcJytdvvxH/Zie1PiAaMAgMYhsqu7+DZRRTvg+cEMw3mRVuodIyQEbpZs8MjR3itViRfZ+UqYi6uKDnz1viLL0aACaYhOLzrE7bQ6Sl4j1MnMrWncUOv3Sq2fus+Y6oYmed84E6HUNljte7vVP9jwPclbCAmj5WuC/Av9dSqqHEpPRbKJ4tAuBrO2LBKS7J62FjRYiY807V1viyxUgjK5FmsQyfVr3/YOirluSx54e4XwxxDrAjtrd0x68H7/Mt6HP/79cWKaVbC7XUckYRmE=
                                                            np0005548785.localdomain,192.168.122.103,np0005548785* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPnHRGHw2U3XDUZBfS69ZpwocvZ2haE6Sebzf3BV40dJ
                                                            np0005548785.localdomain,192.168.122.103,np0005548785* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMgorOAtIXk7BOknkR82ERwiBlDoAcpTTo8DwXwOeKFxueIG2AzGwqy/M3AlognMpbS9bigTSmXKYzfS5SNcGD8=
                                                            np0005548786.localdomain,192.168.122.104,np0005548786* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDURzBA/aIGrwPgaIApy0UCTi4wdQhfDEx0QfkSAIn0ZptZcOkaR8BWtl9GijRPEp++Ep4qU04JcwHO1ZULd2UnCdDeg1Imwnf7x9HQBjAr0mH+tE0t4MBLtBbrk8Ep5ggyKATK1CvEl3NuGIS4gSSUWxzkR74Iju/GtrEMuVnMSsOw+auBofiv1ne4zyXqQWZORiK32DSolw1KyXGLyqG+JOpl3Kza5o79S1KUghfRzskZMm/AxFYciPmg4EQK/jL9Izj7qq3v8MaL8baeyqNlPaaRKCh+pkZlYtoPzDhe+vn/jwnDmQgqC1Bh+dkNiKEVlWz3mxoiMoeLY3jP/tMF2M4M8puGakPc2sqJxk1++Tv/lFRO3zBS+V2kECKI5DtQI6XThfLYXxIQl5SHr4yGEoxhMNt6YNQPLp6lg30kHO24YyNNA7LPFYYoOGUCaq5ZVUCF9lagMxcgkN0Bs+ZZqeni+53RqxoutiRZ0m9pIiqxGjrJjbNFXmofgfDBcUE=
                                                            np0005548786.localdomain,192.168.122.104,np0005548786* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILIgwHZ/0Q8K6t9dlBCQwEO6OABCR0J0IF6hfmA44GBM
                                                            np0005548786.localdomain,192.168.122.104,np0005548786* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBItDJKfsljV78XBJL8EuwSxDvfxuZ9Jz6PgjXVap/GJqsza+9ApDVkNpmAVhdxO9qX1PPD9KOxQjcrD2A8MXQ10=
                                                            np0005548787.localdomain,192.168.122.105,np0005548787* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXe0UZ2kJKcvYaHSnjIOf3QqkGhArLo32nvDm8Pl8ZVNWfdRV8R+e17etAicDq//fxWC+U9jiHp4qI6/0Jm64rPocmJKaA+r79sNpv+598NlGtVUfTYQ34Ze9bgaPkjAwKfPNrzjSDChyfkys4Hm0J7ttog5rvMcuRelxkFmoonOcuzBC+9ufI6qld7br5w4WDookwamkefbMCiwAZxrw2bSjoTu7/TEFbt7SM0lUIdqP5WvxpWK52OkjnakQ0BL4QHdRYz1kBx/vS0TFxXb2pMO291dfkxDl3H2oXXZZYK/LWy3nZyJEX+mD5J6WOEs5HC5GQQ+CNEV0wa2e/gJA7KBsyL5T6RBtH8id22sBHZkzcaDhUz1ZABGAiOx4rdrr4YFFFy/u00nX3ZCuRBPXYh37Pafl7GXcSKyhTmkCZI0591RdNmb1duh9ZIObRmPVp2+WIheAFvS7EU4B0+ZjAEbDJgiSa9VlUrlRFX0ajcFHR8FnwNRcoERO3A3h4/Tc=
                                                            np0005548787.localdomain,192.168.122.105,np0005548787* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGClV/UHC6wrHLH6ofPCeG9Z3WpaSbH42qD4AsTbywke
                                                            np0005548787.localdomain,192.168.122.105,np0005548787* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD+VBma5zUGbc6C8yvVJH1yH01D2HwvgMwJZ3Ew/fQ9uangWsK7hoczIcWgUhEN67mue6bMYPNkv+zbE5QDlLqA=
                                                            np0005548789.localdomain,192.168.122.107,np0005548789* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwH3rhRTvOINLmLdbeRXeXOiMzz+IXEuW2cXYAe50Wcc3ikH2RVGirWQrwLc8hAoA7UFCXADqEMxPg6/fLsQkbP7kLOpUtam8nuXvgt8VHM4RFl5wh9EOgZ7DWgjA7s3r2eQMcBhv82CjVMLY/YjnLuRNXCsJAqeG32qcKedKH/huEFvkb49U/UnNlxi5BfNrMlY9n5UQXE2rd6EKwP58aP/qQ1ie3p8nwHc36/MJcfEIABlLaoHK/LxnadOFTh93OkqVi7A0VQsKSmKD64nABiN7ML0NReoyRIQI5r3Dawe8v2K9jCBh5jY88TVsYUJqgwoZSSU73sYGHX4uF+PY8wL7qwn6mCzA17GGYeB8Dy0N8qwDqah6kUjpcLwGp7YaKf0FIZPBKcLVMrX6Tnwxer1j3kOIt3tgLZoz3mMfstWfCyvt9t+GEW5MCE+MBkY4Eree3uK7pI+wJ3vFQS9XVP00hjNiLWYmoaaW6rl8xtw7QtGhzmjcWbOxaZvHWE5E=
                                                            np0005548789.localdomain,192.168.122.107,np0005548789* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM7zsgz8o1LOsRIDgDJ0j4aB+gvG7QE4PuIS5gi3px2U
                                                            np0005548789.localdomain,192.168.122.107,np0005548789* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNB22R613xD5iIn21fw712bqcytUxBHAFZPMSjpWL8XVTi6taleS2y8rpYqGoN21DgQgwO1SxmcqZLfwlh7T5/4=
                                                            np0005548790.localdomain,192.168.122.108,np0005548790* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmdMCy44p73Ui+o09YQitqR9FILqoJ6AGYYutFVH6wn5m1j6oEoI4XgVFPR3UpG3SXdoiG7m0DRxC/WZZMpZbaQ3ZHbJJioRh1hV5uQtK5k2gtmS8uePng5UprbLncMXf+HIxNRvirU3r6zdgNGAroK0rN0nWESi/FNb2flu9Aw9JAsgIAAouW4IUoeyMGZ1AflhRhsWsQMstM9UEeGU+iTqV7al1URVCSq1finY99m+QC+Pftpd2C/+agboOIiVa63+D/RqqfYqh4C/PYfDbssYjcZzk3P90+HQ6uMKexX3HRnFbyje4eLSBHC0pjr/4pNfk/eSpdHeyMAPsP+QlBztdcPj9OnjcmT9ymeJRKF7GwNIWg3Pn9L2yY50d8l9Zu6rNIDW786XNcbm88yHdCHA5FE1A8XTWQRQ3eUSUsmsvf03pExAouRM4Fj8dvCu6wzG2SuyWqmdT5yCNrUG0e1CeE6PcfTLBeS5CJAwn5HM8aUndQQldWmaUbMPL5Jis=
                                                            np0005548790.localdomain,192.168.122.108,np0005548790* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPoyxTI8+8n9PWFBkZatum98GfJRQMd2qn9CijEFzfEz
                                                            np0005548790.localdomain,192.168.122.108,np0005548790* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOlnHgYu82mRZ1QroLe1BG6rymOGDqDJGz5MpHZnXnhJ6iIwC87em0cGHiSKgU+UZ4DpWQTIlxwKsn9Jp9Hl1Y=
                                                            np0005548788.localdomain,192.168.122.106,np0005548788* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxIoAQH9YZnGrAxYR5prFQwo6HY5mwdDjndb+bp2pwvtVLM4ABIdCi+K1wpbhOpoO7BsYOf/tdBqemvSDleNo/ZLh3v3MmoVtoTtQZqLWsAQWFgJCjcGUGB+H3CHhtbp706coVQMlGD+UQqpCBy8WamMB/Ldy+hSHbLHwzuMzj8tO90vUbEyuKgOuu/X3ZFa+Yjo/asQ+PTrVfirh1QvRQ9aK22xH89KbThA/1an4OjnNGLCP752auSQ894B21QLKfqaMGPlpbjU8Wr6MP4zKV9lUzpQiFr6IU6cd4CeIsJDj7FnAZuBSmi8ewgm/r4ZWkmCSlqw8OpMC5soJnm8Q4PJTIFvT9eyyFCh9xmQkMhzE8P332LtYjZ+vXhYFU14e04mOQx5UrtHN8uWJVbOAwtLNAcenHyRtCQGkAZ6f9q0OvSuYr+o3FhHhN5ABu32AKAD8YpkjLypi+PbaiKNQW8XzPAHHbV8CGZ4B09ZWeQY49VA0bPxIYBXd1mEBlXSE=
                                                            np0005548788.localdomain,192.168.122.106,np0005548788* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJBkIOjRpLl815RvOqIZSSNUu/CGLqucfCRUist+ERWP
                                                            np0005548788.localdomain,192.168.122.106,np0005548788* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNyEL9+sMn9BF0LnCanz9jbKQTm6FNV71J4qGFTonom0KXHpLL1p0eyrgFY0iwGH2UtwJ6VWm5bm2RaQJmObwZI=
                                                             create=True mode=0644 path=/tmp/ansible.w53fk3lq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:10 np0005548790.localdomain sudo[142426]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:11 np0005548790.localdomain sudo[142518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quyxhqwfgcotmlvcfrucrxmmyvhqpfjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013590.7808876-378-61053094163898/AnsiballZ_command.py
Dec 06 09:33:11 np0005548790.localdomain sudo[142518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:11 np0005548790.localdomain python3.9[142520]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.w53fk3lq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:33:11 np0005548790.localdomain sudo[142518]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:12 np0005548790.localdomain sudo[142612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phizcynzmaagdzpyfigaendefdguoylt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013592.1442714-426-275729833385654/AnsiballZ_file.py
Dec 06 09:33:12 np0005548790.localdomain sudo[142612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:12 np0005548790.localdomain python3.9[142614]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.w53fk3lq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:12 np0005548790.localdomain sudo[142612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:13 np0005548790.localdomain sshd[141794]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:33:13 np0005548790.localdomain systemd[1]: session-44.scope: Deactivated successfully.
Dec 06 09:33:13 np0005548790.localdomain systemd[1]: session-44.scope: Consumed 4.170s CPU time.
Dec 06 09:33:13 np0005548790.localdomain systemd-logind[760]: Session 44 logged out. Waiting for processes to exit.
Dec 06 09:33:13 np0005548790.localdomain systemd-logind[760]: Removed session 44.
Dec 06 09:33:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15480 DF PROTO=TCP SPT=33752 DPT=9102 SEQ=4144235667 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F33BA20000000001030307) 
Dec 06 09:33:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19564 DF PROTO=TCP SPT=37428 DPT=9882 SEQ=3349360038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F33BA70000000001030307) 
Dec 06 09:33:19 np0005548790.localdomain sshd[142629]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:33:20 np0005548790.localdomain sshd[142629]: Accepted publickey for zuul from 192.168.122.30 port 53166 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:33:20 np0005548790.localdomain systemd-logind[760]: New session 45 of user zuul.
Dec 06 09:33:20 np0005548790.localdomain systemd[1]: Started Session 45 of User zuul.
Dec 06 09:33:20 np0005548790.localdomain sshd[142629]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:33:21 np0005548790.localdomain python3.9[142722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:22 np0005548790.localdomain sudo[142816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpwrgwvjckulsmaxixsrpesrerhrwgcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013601.721617-58-272946930474991/AnsiballZ_systemd.py
Dec 06 09:33:22 np0005548790.localdomain sudo[142816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:22 np0005548790.localdomain python3.9[142818]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 09:33:23 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42175 DF PROTO=TCP SPT=47354 DPT=9105 SEQ=453746348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F34FFD0000000001030307) 
Dec 06 09:33:23 np0005548790.localdomain sudo[142816]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:24 np0005548790.localdomain sudo[142910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbpobhorupnisiqfwjpgcdwsauhdunoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013603.777664-83-172547867245481/AnsiballZ_systemd.py
Dec 06 09:33:24 np0005548790.localdomain sudo[142910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:24 np0005548790.localdomain python3.9[142912]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:33:24 np0005548790.localdomain sudo[142910]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:25 np0005548790.localdomain sudo[143003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsojbskzmalbvhgvotdvrijtkhzifzrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013604.6642566-109-257164070711933/AnsiballZ_command.py
Dec 06 09:33:25 np0005548790.localdomain sudo[143003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:25 np0005548790.localdomain python3.9[143005]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:33:25 np0005548790.localdomain sudo[143003]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:25 np0005548790.localdomain sudo[143096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmhfqicanltjofhkkaxuljjjpifecgwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013605.4486382-134-253735984472101/AnsiballZ_stat.py
Dec 06 09:33:25 np0005548790.localdomain sudo[143096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:26 np0005548790.localdomain python3.9[143098]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:33:26 np0005548790.localdomain sudo[143096]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:26 np0005548790.localdomain sudo[143190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnziwgcmpihqizgytcwkyguwqrvbcjww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013606.3155906-157-263397320734507/AnsiballZ_command.py
Dec 06 09:33:26 np0005548790.localdomain sudo[143190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:26 np0005548790.localdomain python3.9[143192]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:33:26 np0005548790.localdomain sudo[143190]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:27 np0005548790.localdomain sudo[143285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdvqxrqmyauhmmtwhvqisjojzjmitavw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013607.0030854-182-264951885666062/AnsiballZ_file.py
Dec 06 09:33:27 np0005548790.localdomain sudo[143285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:27 np0005548790.localdomain python3.9[143287]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:27 np0005548790.localdomain sudo[143285]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:28 np0005548790.localdomain sshd[142629]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:33:28 np0005548790.localdomain systemd[1]: session-45.scope: Deactivated successfully.
Dec 06 09:33:28 np0005548790.localdomain systemd[1]: session-45.scope: Consumed 3.842s CPU time.
Dec 06 09:33:28 np0005548790.localdomain systemd-logind[760]: Session 45 logged out. Waiting for processes to exit.
Dec 06 09:33:28 np0005548790.localdomain systemd-logind[760]: Removed session 45.
Dec 06 09:33:33 np0005548790.localdomain sshd[143302]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:33:33 np0005548790.localdomain sshd[143302]: Accepted publickey for zuul from 192.168.122.30 port 60084 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:33:33 np0005548790.localdomain systemd-logind[760]: New session 46 of user zuul.
Dec 06 09:33:33 np0005548790.localdomain systemd[1]: Started Session 46 of User zuul.
Dec 06 09:33:33 np0005548790.localdomain sshd[143302]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:33:34 np0005548790.localdomain python3.9[143395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17318 DF PROTO=TCP SPT=50550 DPT=9101 SEQ=1323994876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F37B970000000001030307) 
Dec 06 09:33:35 np0005548790.localdomain sudo[143489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yupfmhnywilwtdocpiqeguzxppjoxslu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013615.1654968-64-169713904627298/AnsiballZ_setup.py
Dec 06 09:33:35 np0005548790.localdomain sudo[143489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:35 np0005548790.localdomain python3.9[143491]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:33:35 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17319 DF PROTO=TCP SPT=50550 DPT=9101 SEQ=1323994876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F37F9F0000000001030307) 
Dec 06 09:33:36 np0005548790.localdomain sudo[143489]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48363 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=999928968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F381170000000001030307) 
Dec 06 09:33:36 np0005548790.localdomain sudo[143543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykwhdymqwqdccaeweplfrerqbbmgteto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013615.1654968-64-169713904627298/AnsiballZ_dnf.py
Dec 06 09:33:36 np0005548790.localdomain sudo[143543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:36 np0005548790.localdomain python3.9[143545]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 06 09:33:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48364 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=999928968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F385200000000001030307) 
Dec 06 09:33:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17320 DF PROTO=TCP SPT=50550 DPT=9101 SEQ=1323994876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F387A00000000001030307) 
Dec 06 09:33:39 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48365 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=999928968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F38D200000000001030307) 
Dec 06 09:33:39 np0005548790.localdomain sudo[143543]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:41 np0005548790.localdomain python3.9[143638]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:33:41 np0005548790.localdomain sudo[143640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:33:41 np0005548790.localdomain sudo[143640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:33:41 np0005548790.localdomain sudo[143640]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:41 np0005548790.localdomain sudo[143655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:33:41 np0005548790.localdomain sudo[143655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:33:41 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17321 DF PROTO=TCP SPT=50550 DPT=9101 SEQ=1323994876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3975F0000000001030307) 
Dec 06 09:33:42 np0005548790.localdomain sudo[143655]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:42 np0005548790.localdomain sudo[143791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yymyjqwgdsvsfordzrxhtkqybrkxcpfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013622.2551765-127-167484514802596/AnsiballZ_file.py
Dec 06 09:33:42 np0005548790.localdomain sudo[143791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:42 np0005548790.localdomain sudo[143794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:33:42 np0005548790.localdomain sudo[143794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:33:42 np0005548790.localdomain sudo[143794]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:42 np0005548790.localdomain python3.9[143793]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:42 np0005548790.localdomain sudo[143791]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48366 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=999928968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F39CE00000000001030307) 
Dec 06 09:33:43 np0005548790.localdomain sudo[143898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pryspoitowfhmrznfwmtfcvetqqsvbye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013623.0868852-151-41704525807392/AnsiballZ_file.py
Dec 06 09:33:43 np0005548790.localdomain sudo[143898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:43 np0005548790.localdomain python3.9[143900]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:43 np0005548790.localdomain sudo[143898]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:44 np0005548790.localdomain sudo[143990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izmbsckjqyuttcnolejtbyntnesxloql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013623.835936-175-206749963624366/AnsiballZ_lineinfile.py
Dec 06 09:33:44 np0005548790.localdomain sudo[143990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:44 np0005548790.localdomain python3.9[143992]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated
                                                            Core libraries or services have been updated since boot-up:
                                                              * systemd
                                                            
                                                            Reboot is required to fully utilize these updates.
                                                            More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:44 np0005548790.localdomain sudo[143990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:45 np0005548790.localdomain python3.9[144082]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:33:46 np0005548790.localdomain python3.9[144172]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:33:46 np0005548790.localdomain python3.9[144264]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:33:47 np0005548790.localdomain sshd[143302]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:33:47 np0005548790.localdomain systemd[1]: session-46.scope: Deactivated successfully.
Dec 06 09:33:47 np0005548790.localdomain systemd[1]: session-46.scope: Consumed 8.684s CPU time.
Dec 06 09:33:47 np0005548790.localdomain systemd-logind[760]: Session 46 logged out. Waiting for processes to exit.
Dec 06 09:33:47 np0005548790.localdomain systemd-logind[760]: Removed session 46.
Dec 06 09:33:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19031 DF PROTO=TCP SPT=40908 DPT=9102 SEQ=1853198121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3B0D20000000001030307) 
Dec 06 09:33:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47835 DF PROTO=TCP SPT=39624 DPT=9882 SEQ=3978788345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3B0D80000000001030307) 
Dec 06 09:33:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19032 DF PROTO=TCP SPT=40908 DPT=9102 SEQ=1853198121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3B4DF0000000001030307) 
Dec 06 09:33:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47836 DF PROTO=TCP SPT=39624 DPT=9882 SEQ=3978788345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3B4E00000000001030307) 
Dec 06 09:33:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19033 DF PROTO=TCP SPT=40908 DPT=9102 SEQ=1853198121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3BCDF0000000001030307) 
Dec 06 09:33:54 np0005548790.localdomain sshd[144279]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:33:54 np0005548790.localdomain sshd[144279]: Accepted publickey for zuul from 192.168.122.30 port 36390 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:33:54 np0005548790.localdomain systemd-logind[760]: New session 47 of user zuul.
Dec 06 09:33:54 np0005548790.localdomain systemd[1]: Started Session 47 of User zuul.
Dec 06 09:33:54 np0005548790.localdomain sshd[144279]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:33:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41446 DF PROTO=TCP SPT=45626 DPT=9105 SEQ=3677954985 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3C91F0000000001030307) 
Dec 06 09:33:55 np0005548790.localdomain python3.9[144372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:33:57 np0005548790.localdomain sudo[144466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzftadtranfrmajtyeoqpkdmzixcbedp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013636.9605124-164-6511992311142/AnsiballZ_file.py
Dec 06 09:33:57 np0005548790.localdomain sudo[144466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:57 np0005548790.localdomain python3.9[144468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:57 np0005548790.localdomain sudo[144466]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:58 np0005548790.localdomain sudo[144558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcroqfofmmtxisvilttdknxyfpyccllc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013637.590146-185-41198691624282/AnsiballZ_stat.py
Dec 06 09:33:58 np0005548790.localdomain sudo[144558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:58 np0005548790.localdomain python3.9[144560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:33:58 np0005548790.localdomain sudo[144558]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:58 np0005548790.localdomain sudo[144631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omjctsshxnydstvenslvtyuujqxsmcet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013637.590146-185-41198691624282/AnsiballZ_copy.py
Dec 06 09:33:58 np0005548790.localdomain sudo[144631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:59 np0005548790.localdomain python3.9[144633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013637.590146-185-41198691624282/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:33:59 np0005548790.localdomain sudo[144631]: pam_unix(sudo:session): session closed for user root
Dec 06 09:33:59 np0005548790.localdomain sudo[144723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gywerublewjkfyzbyjumbjliiwihyswh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013639.307887-229-66872387710744/AnsiballZ_file.py
Dec 06 09:33:59 np0005548790.localdomain sudo[144723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:33:59 np0005548790.localdomain python3.9[144725]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:33:59 np0005548790.localdomain sudo[144723]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:00 np0005548790.localdomain sudo[144815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vsurnundkfiudarpfrzuzbidgsqulkaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013640.0952377-255-29606834778116/AnsiballZ_stat.py
Dec 06 09:34:00 np0005548790.localdomain sudo[144815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:00 np0005548790.localdomain python3.9[144817]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:00 np0005548790.localdomain sudo[144815]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41448 DF PROTO=TCP SPT=45626 DPT=9105 SEQ=3677954985 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3E0DF0000000001030307) 
Dec 06 09:34:01 np0005548790.localdomain sudo[144888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzwfwzdperfcgzbwbztdxmjroeqcxucz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013640.0952377-255-29606834778116/AnsiballZ_copy.py
Dec 06 09:34:01 np0005548790.localdomain sudo[144888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:01 np0005548790.localdomain python3.9[144890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013640.0952377-255-29606834778116/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:01 np0005548790.localdomain sudo[144888]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:02 np0005548790.localdomain sudo[144980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qexifnzgobdjerhvxvgnvfevpwbpdiek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013642.174446-298-230914636726559/AnsiballZ_file.py
Dec 06 09:34:02 np0005548790.localdomain sudo[144980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:02 np0005548790.localdomain python3.9[144982]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:02 np0005548790.localdomain sudo[144980]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:03 np0005548790.localdomain sudo[145072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqcrywzcyqczndpjgkwuwyqpkoxlbufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013642.7869093-324-95890832275754/AnsiballZ_stat.py
Dec 06 09:34:03 np0005548790.localdomain sudo[145072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:03 np0005548790.localdomain python3.9[145074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:03 np0005548790.localdomain sudo[145072]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:03 np0005548790.localdomain sudo[145145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijoiuqpqtesmpejgnkhmhnmjtnssatgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013642.7869093-324-95890832275754/AnsiballZ_copy.py
Dec 06 09:34:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19035 DF PROTO=TCP SPT=40908 DPT=9102 SEQ=1853198121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3ED200000000001030307) 
Dec 06 09:34:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47839 DF PROTO=TCP SPT=39624 DPT=9882 SEQ=3978788345 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3ED200000000001030307) 
Dec 06 09:34:03 np0005548790.localdomain sudo[145145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:04 np0005548790.localdomain python3.9[145147]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013642.7869093-324-95890832275754/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:04 np0005548790.localdomain sudo[145145]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:04 np0005548790.localdomain sudo[145237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxllhbeffydtanerdyxacsxpetimcjlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013644.2582984-365-10231536166383/AnsiballZ_file.py
Dec 06 09:34:04 np0005548790.localdomain sudo[145237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:04 np0005548790.localdomain python3.9[145239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:04 np0005548790.localdomain sudo[145237]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:05 np0005548790.localdomain sudo[145329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvcateuuuwvcbhzgthuvnhskucorzlzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013644.927288-391-213344221677509/AnsiballZ_stat.py
Dec 06 09:34:05 np0005548790.localdomain sudo[145329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:05 np0005548790.localdomain python3.9[145331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:05 np0005548790.localdomain sudo[145329]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:05 np0005548790.localdomain chronyd[136471]: Selected source 54.39.23.64 (pool.ntp.org)
Dec 06 09:34:05 np0005548790.localdomain sudo[145402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvjnhqjxitcklleuyyzxiujfkyynomlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013644.927288-391-213344221677509/AnsiballZ_copy.py
Dec 06 09:34:05 np0005548790.localdomain sudo[145402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:06 np0005548790.localdomain python3.9[145404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013644.927288-391-213344221677509/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:06 np0005548790.localdomain sudo[145402]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:06 np0005548790.localdomain sudo[145494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzjezcyftzfonjcnstjyfdxpfrcaivkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013646.4184456-438-14233213783949/AnsiballZ_file.py
Dec 06 09:34:06 np0005548790.localdomain sudo[145494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:06 np0005548790.localdomain python3.9[145496]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:06 np0005548790.localdomain sudo[145494]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27131 DF PROTO=TCP SPT=58202 DPT=9100 SEQ=1296907494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F3FA5F0000000001030307) 
Dec 06 09:34:07 np0005548790.localdomain sudo[145586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtzzflsfiwwgbepmyqwqwqmivyhnwnid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013647.0486476-462-147930550603529/AnsiballZ_stat.py
Dec 06 09:34:07 np0005548790.localdomain sudo[145586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:07 np0005548790.localdomain python3.9[145588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:07 np0005548790.localdomain sudo[145586]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:07 np0005548790.localdomain sudo[145659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chdsqkyfuobdwgrnxmvunopnieiphqus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013647.0486476-462-147930550603529/AnsiballZ_copy.py
Dec 06 09:34:07 np0005548790.localdomain sudo[145659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:08 np0005548790.localdomain python3.9[145661]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013647.0486476-462-147930550603529/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:08 np0005548790.localdomain sudo[145659]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:08 np0005548790.localdomain sudo[145751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjzlfvvztykoctsqdurcvfywlsziytii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013648.2469876-509-26039119181166/AnsiballZ_file.py
Dec 06 09:34:08 np0005548790.localdomain sudo[145751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:08 np0005548790.localdomain python3.9[145753]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:08 np0005548790.localdomain sudo[145751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:09 np0005548790.localdomain sudo[145843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmojqnhvmxoiaxfekesotimwmcvlqrmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013648.8583496-531-139431102453797/AnsiballZ_stat.py
Dec 06 09:34:09 np0005548790.localdomain sudo[145843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:09 np0005548790.localdomain python3.9[145845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:09 np0005548790.localdomain sudo[145843]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:09 np0005548790.localdomain sudo[145916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlhocnvmlspcoglwjzymfrwjptgioqql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013648.8583496-531-139431102453797/AnsiballZ_copy.py
Dec 06 09:34:09 np0005548790.localdomain sudo[145916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:09 np0005548790.localdomain python3.9[145918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013648.8583496-531-139431102453797/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:09 np0005548790.localdomain sudo[145916]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:10 np0005548790.localdomain sudo[146008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkikbcisyzlkehagjtqlpaesoojnrnnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013650.0432484-579-112169589802412/AnsiballZ_file.py
Dec 06 09:34:10 np0005548790.localdomain sudo[146008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:10 np0005548790.localdomain python3.9[146010]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:10 np0005548790.localdomain sudo[146008]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:10 np0005548790.localdomain sudo[146100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vethwzrxhalxfpxvkqeepwemzbsjoxko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013650.6289353-603-193934263758750/AnsiballZ_stat.py
Dec 06 09:34:10 np0005548790.localdomain sudo[146100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:11 np0005548790.localdomain python3.9[146102]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:11 np0005548790.localdomain sudo[146100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:11 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48630 DF PROTO=TCP SPT=47272 DPT=9101 SEQ=2371476005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F40C9F0000000001030307) 
Dec 06 09:34:12 np0005548790.localdomain sudo[146173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iiednezpzracxuoykakkkdezrujfqbei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013650.6289353-603-193934263758750/AnsiballZ_copy.py
Dec 06 09:34:12 np0005548790.localdomain sudo[146173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:12 np0005548790.localdomain python3.9[146175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013650.6289353-603-193934263758750/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:12 np0005548790.localdomain sudo[146173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:12 np0005548790.localdomain sudo[146265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-huagncnnwklrpktaemwvpgmkpfscxphx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013652.3897605-651-96374367318587/AnsiballZ_file.py
Dec 06 09:34:12 np0005548790.localdomain sudo[146265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:12 np0005548790.localdomain python3.9[146267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:12 np0005548790.localdomain sudo[146265]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:13 np0005548790.localdomain sudo[146357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtqhafgsxiwytjziinorbxyupbvgosnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013652.994322-675-273355512194831/AnsiballZ_stat.py
Dec 06 09:34:13 np0005548790.localdomain sudo[146357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27133 DF PROTO=TCP SPT=58202 DPT=9100 SEQ=1296907494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4121F0000000001030307) 
Dec 06 09:34:13 np0005548790.localdomain python3.9[146359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:13 np0005548790.localdomain sudo[146357]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:14 np0005548790.localdomain sudo[146430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-beyqicxensperaxgecvaupfhefimowzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013652.994322-675-273355512194831/AnsiballZ_copy.py
Dec 06 09:34:14 np0005548790.localdomain sudo[146430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:14 np0005548790.localdomain python3.9[146432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013652.994322-675-273355512194831/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=31b29ee7333177b2eb4f4f85549af35c3d0ec3b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:14 np0005548790.localdomain sudo[146430]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:16 np0005548790.localdomain sshd[144279]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:34:16 np0005548790.localdomain systemd[1]: session-47.scope: Deactivated successfully.
Dec 06 09:34:16 np0005548790.localdomain systemd[1]: session-47.scope: Consumed 11.675s CPU time.
Dec 06 09:34:16 np0005548790.localdomain systemd-logind[760]: Session 47 logged out. Waiting for processes to exit.
Dec 06 09:34:16 np0005548790.localdomain systemd-logind[760]: Removed session 47.
Dec 06 09:34:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27079 DF PROTO=TCP SPT=34172 DPT=9102 SEQ=3450285194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F426010000000001030307) 
Dec 06 09:34:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15440 DF PROTO=TCP SPT=37042 DPT=9882 SEQ=1127212798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F426080000000001030307) 
Dec 06 09:34:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27081 DF PROTO=TCP SPT=34172 DPT=9102 SEQ=3450285194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F432200000000001030307) 
Dec 06 09:34:21 np0005548790.localdomain sshd[146447]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:34:21 np0005548790.localdomain sshd[146447]: Accepted publickey for zuul from 192.168.122.30 port 52166 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:34:21 np0005548790.localdomain systemd-logind[760]: New session 48 of user zuul.
Dec 06 09:34:21 np0005548790.localdomain systemd[1]: Started Session 48 of User zuul.
Dec 06 09:34:21 np0005548790.localdomain sshd[146447]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:34:22 np0005548790.localdomain sudo[146540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-useqdysnkkvbafswakqvuzvdcmrugbke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013661.8827555-28-164055160796484/AnsiballZ_file.py
Dec 06 09:34:22 np0005548790.localdomain sudo[146540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:22 np0005548790.localdomain python3.9[146542]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:22 np0005548790.localdomain sudo[146540]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:23 np0005548790.localdomain sudo[146632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcygvxfkmserwdaxiqwtxuiwybvywahv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013662.983187-64-162461079366899/AnsiballZ_stat.py
Dec 06 09:34:23 np0005548790.localdomain sudo[146632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:23 np0005548790.localdomain python3.9[146634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:23 np0005548790.localdomain sudo[146632]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:24 np0005548790.localdomain sudo[146705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfnshnmpublqdiehsixwzbahnxnsexzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013662.983187-64-162461079366899/AnsiballZ_copy.py
Dec 06 09:34:24 np0005548790.localdomain sudo[146705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:24 np0005548790.localdomain python3.9[146707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013662.983187-64-162461079366899/.source.conf _original_basename=ceph.conf follow=False checksum=74b6793c28400fa0a16ce9abdc4efa82feeb961d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:24 np0005548790.localdomain sudo[146705]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59779 DF PROTO=TCP SPT=44548 DPT=9105 SEQ=3463971875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F43E600000000001030307) 
Dec 06 09:34:24 np0005548790.localdomain sudo[146797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzjxvacbdnmbaqtcdkonucwcdpmmkmtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013664.3874712-64-128567541124519/AnsiballZ_stat.py
Dec 06 09:34:24 np0005548790.localdomain sudo[146797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:24 np0005548790.localdomain python3.9[146799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:24 np0005548790.localdomain sudo[146797]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:25 np0005548790.localdomain sudo[146870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjbjkcfjaglkrqnhllithyddjbbcucyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013664.3874712-64-128567541124519/AnsiballZ_copy.py
Dec 06 09:34:25 np0005548790.localdomain sudo[146870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:25 np0005548790.localdomain python3.9[146872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013664.3874712-64-128567541124519/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=9d631b6552ddeaa0e75a39b18f2bdb583e0e85e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:25 np0005548790.localdomain sudo[146870]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:25 np0005548790.localdomain sshd[146447]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:34:25 np0005548790.localdomain systemd[1]: session-48.scope: Deactivated successfully.
Dec 06 09:34:25 np0005548790.localdomain systemd[1]: session-48.scope: Consumed 2.296s CPU time.
Dec 06 09:34:25 np0005548790.localdomain systemd-logind[760]: Session 48 logged out. Waiting for processes to exit.
Dec 06 09:34:25 np0005548790.localdomain systemd-logind[760]: Removed session 48.
Dec 06 09:34:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59781 DF PROTO=TCP SPT=44548 DPT=9105 SEQ=3463971875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F456200000000001030307) 
Dec 06 09:34:31 np0005548790.localdomain sshd[146887]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:34:31 np0005548790.localdomain sshd[146887]: Accepted publickey for zuul from 192.168.122.30 port 57124 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:34:31 np0005548790.localdomain systemd-logind[760]: New session 49 of user zuul.
Dec 06 09:34:31 np0005548790.localdomain systemd[1]: Started Session 49 of User zuul.
Dec 06 09:34:31 np0005548790.localdomain sshd[146887]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:34:32 np0005548790.localdomain python3.9[146980]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:34:33 np0005548790.localdomain sudo[147074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ywgethlpqbymwjwuulzdjrqudmesdmmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013673.0090735-66-255345529881304/AnsiballZ_file.py
Dec 06 09:34:33 np0005548790.localdomain sudo[147074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:33 np0005548790.localdomain python3.9[147076]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:33 np0005548790.localdomain sudo[147074]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:33 np0005548790.localdomain sudo[147166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chohqovgawqnycbqkqjajswvvewonimp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013673.715014-66-187649272027459/AnsiballZ_file.py
Dec 06 09:34:33 np0005548790.localdomain sudo[147166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15444 DF PROTO=TCP SPT=37042 DPT=9882 SEQ=1127212798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4631F0000000001030307) 
Dec 06 09:34:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27083 DF PROTO=TCP SPT=34172 DPT=9102 SEQ=3450285194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4631F0000000001030307) 
Dec 06 09:34:34 np0005548790.localdomain python3.9[147168]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:34:34 np0005548790.localdomain sudo[147166]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:35 np0005548790.localdomain python3.9[147258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:34:36 np0005548790.localdomain sudo[147348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djoojdzruhlynpmzrvqaphowmmyoxxyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013675.229936-133-129054879541730/AnsiballZ_seboolean.py
Dec 06 09:34:36 np0005548790.localdomain sudo[147348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:36 np0005548790.localdomain python3.9[147350]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 09:34:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48632 DF PROTO=TCP SPT=47272 DPT=9101 SEQ=2371476005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F46D1F0000000001030307) 
Dec 06 09:34:36 np0005548790.localdomain sudo[147348]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:37 np0005548790.localdomain sudo[147440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltonsukdqifzpcfmvidpkkuuzdvrohic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013677.0084326-163-79019594637995/AnsiballZ_setup.py
Dec 06 09:34:37 np0005548790.localdomain sudo[147440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:37 np0005548790.localdomain python3.9[147442]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:34:37 np0005548790.localdomain sudo[147440]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:38 np0005548790.localdomain sudo[147494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sszjamflpuxclnuyeakcbwhkxikescyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013677.0084326-163-79019594637995/AnsiballZ_dnf.py
Dec 06 09:34:38 np0005548790.localdomain sudo[147494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:38 np0005548790.localdomain python3.9[147496]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:34:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48369 DF PROTO=TCP SPT=48462 DPT=9100 SEQ=999928968 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F47B1F0000000001030307) 
Dec 06 09:34:41 np0005548790.localdomain sudo[147494]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:42 np0005548790.localdomain sudo[147588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eejdplrktrsweakpipptyzvgsypslsxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013682.1150157-200-115132450277582/AnsiballZ_systemd.py
Dec 06 09:34:42 np0005548790.localdomain sudo[147588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:42 np0005548790.localdomain sudo[147591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:34:42 np0005548790.localdomain sudo[147591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:34:42 np0005548790.localdomain sudo[147591]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:42 np0005548790.localdomain sudo[147606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 09:34:42 np0005548790.localdomain sudo[147606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:34:42 np0005548790.localdomain python3.9[147590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:34:43 np0005548790.localdomain sudo[147588]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17017 DF PROTO=TCP SPT=53566 DPT=9100 SEQ=223540018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F487600000000001030307) 
Dec 06 09:34:43 np0005548790.localdomain systemd[1]: tmp-crun.8WNSrb.mount: Deactivated successfully.
Dec 06 09:34:43 np0005548790.localdomain podman[147709]: 2025-12-06 09:34:43.837371729 +0000 UTC m=+0.106083276 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Dec 06 09:34:43 np0005548790.localdomain podman[147709]: 2025-12-06 09:34:43.940675479 +0000 UTC m=+0.209387016 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:34:44 np0005548790.localdomain sudo[147606]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:44 np0005548790.localdomain sudo[147776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:34:44 np0005548790.localdomain sudo[147776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:34:44 np0005548790.localdomain sudo[147776]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:44 np0005548790.localdomain sudo[147795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:34:44 np0005548790.localdomain sudo[147795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:34:44 np0005548790.localdomain sudo[147895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsakaelrosarigcbmqkbuqvleqlvegzm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013684.3332329-223-98488828047913/AnsiballZ_edpm_nftables_snippet.py
Dec 06 09:34:44 np0005548790.localdomain sudo[147895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:44 np0005548790.localdomain sudo[147795]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:44 np0005548790.localdomain python3[147897]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                                            rule:
                                                              proto: udp
                                                              dport: 4789
                                                          - rule_name: 119 neutron geneve networks
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              state: ["UNTRACKED"]
                                                          - rule_name: 120 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: OUTPUT
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                          - rule_name: 121 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: PREROUTING
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 06 09:34:44 np0005548790.localdomain sudo[147895]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:45 np0005548790.localdomain sudo[148005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfvecfsvjxoigqsiunrnnuqmhaxoihyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013685.3629787-251-105410733905593/AnsiballZ_file.py
Dec 06 09:34:45 np0005548790.localdomain sudo[148005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:45 np0005548790.localdomain sudo[148004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:34:45 np0005548790.localdomain sudo[148004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:34:45 np0005548790.localdomain sudo[148004]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:45 np0005548790.localdomain python3.9[148019]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:45 np0005548790.localdomain sudo[148005]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:46 np0005548790.localdomain sudo[148111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibhzhjrttostvsyzyrbptsqgkfivkugd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013686.0183294-274-95719211283036/AnsiballZ_stat.py
Dec 06 09:34:46 np0005548790.localdomain sudo[148111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:46 np0005548790.localdomain python3.9[148113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:46 np0005548790.localdomain sudo[148111]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:46 np0005548790.localdomain sudo[148159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-almuhtrnmnndzlafedrnalnigedopaiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013686.0183294-274-95719211283036/AnsiballZ_file.py
Dec 06 09:34:46 np0005548790.localdomain sudo[148159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:47 np0005548790.localdomain python3.9[148161]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:47 np0005548790.localdomain sudo[148159]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:48 np0005548790.localdomain sudo[148251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdrydslukcnqulvisaypetpwomonwjny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013688.0704243-310-5996171778263/AnsiballZ_stat.py
Dec 06 09:34:48 np0005548790.localdomain sudo[148251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42041 DF PROTO=TCP SPT=49958 DPT=9102 SEQ=1075088325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F49B330000000001030307) 
Dec 06 09:34:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40603 DF PROTO=TCP SPT=40100 DPT=9882 SEQ=912189104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F49B380000000001030307) 
Dec 06 09:34:48 np0005548790.localdomain python3.9[148253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:48 np0005548790.localdomain sudo[148251]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:48 np0005548790.localdomain sudo[148299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-denlfjnhxeaixtdkqjaxxhjgntrxnfbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013688.0704243-310-5996171778263/AnsiballZ_file.py
Dec 06 09:34:48 np0005548790.localdomain sudo[148299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:48 np0005548790.localdomain python3.9[148301]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9wnbbevb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:49 np0005548790.localdomain sudo[148299]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:49 np0005548790.localdomain sudo[148391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czgpqjstymrsuacixenepsmwxujohufo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013689.2171073-347-167146950621504/AnsiballZ_stat.py
Dec 06 09:34:49 np0005548790.localdomain sudo[148391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:50 np0005548790.localdomain python3.9[148393]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:50 np0005548790.localdomain sudo[148391]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:50 np0005548790.localdomain sudo[148439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mziufoevphcqlwnldwrseqeimdtbpebm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013689.2171073-347-167146950621504/AnsiballZ_file.py
Dec 06 09:34:50 np0005548790.localdomain sudo[148439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:50 np0005548790.localdomain python3.9[148441]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:50 np0005548790.localdomain sudo[148439]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:51 np0005548790.localdomain sudo[148531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbkiwaakxuodtuheokzisluqsyynibmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013690.8219233-385-4977118743176/AnsiballZ_command.py
Dec 06 09:34:51 np0005548790.localdomain sudo[148531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:51 np0005548790.localdomain python3.9[148533]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:34:51 np0005548790.localdomain sudo[148531]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17018 DF PROTO=TCP SPT=53566 DPT=9100 SEQ=223540018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4A71F0000000001030307) 
Dec 06 09:34:52 np0005548790.localdomain sudo[148624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzrpbyzkhprjctznddtxbniyelvycbkz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013691.6531231-409-102079670534247/AnsiballZ_edpm_nftables_from_files.py
Dec 06 09:34:52 np0005548790.localdomain sudo[148624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:52 np0005548790.localdomain python3[148626]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:34:52 np0005548790.localdomain sudo[148624]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:52 np0005548790.localdomain sudo[148716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqggbtwtbuvlhiqsmnukrlacdsgcchhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013692.4620113-433-259654270643793/AnsiballZ_stat.py
Dec 06 09:34:52 np0005548790.localdomain sudo[148716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:52 np0005548790.localdomain python3.9[148718]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:52 np0005548790.localdomain sudo[148716]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:53 np0005548790.localdomain sudo[148791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyodulpixqprhteecgcmojfpwofpfvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013692.4620113-433-259654270643793/AnsiballZ_copy.py
Dec 06 09:34:53 np0005548790.localdomain sudo[148791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:53 np0005548790.localdomain python3.9[148793]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013692.4620113-433-259654270643793/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:53 np0005548790.localdomain sudo[148791]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:54 np0005548790.localdomain sudo[148883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lthldeyhlfmufxnwevczexvjyrcojqmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013693.9059422-478-39894731899504/AnsiballZ_stat.py
Dec 06 09:34:54 np0005548790.localdomain sudo[148883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:54 np0005548790.localdomain python3.9[148885]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:54 np0005548790.localdomain sudo[148883]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15880 DF PROTO=TCP SPT=39308 DPT=9105 SEQ=2185136414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4B39F0000000001030307) 
Dec 06 09:34:54 np0005548790.localdomain sudo[148958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxgejptqnlcwfkakiaqmdpsxfxwkkvjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013693.9059422-478-39894731899504/AnsiballZ_copy.py
Dec 06 09:34:54 np0005548790.localdomain sudo[148958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:54 np0005548790.localdomain python3.9[148960]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013693.9059422-478-39894731899504/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:54 np0005548790.localdomain sudo[148958]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:55 np0005548790.localdomain sudo[149050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qocjxshbzemtytjmohfyqnkdrskiitsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013695.1865838-523-50505549917126/AnsiballZ_stat.py
Dec 06 09:34:55 np0005548790.localdomain sudo[149050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:55 np0005548790.localdomain python3.9[149052]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:55 np0005548790.localdomain sudo[149050]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:55 np0005548790.localdomain sudo[149125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-secmktacfcdseukieouxxopslvwfiqot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013695.1865838-523-50505549917126/AnsiballZ_copy.py
Dec 06 09:34:55 np0005548790.localdomain sudo[149125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:56 np0005548790.localdomain python3.9[149127]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013695.1865838-523-50505549917126/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:56 np0005548790.localdomain sudo[149125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:56 np0005548790.localdomain sudo[149217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzvdesitfdfsxvxxulspipeuergnimmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013696.4166973-568-235693012244106/AnsiballZ_stat.py
Dec 06 09:34:56 np0005548790.localdomain sudo[149217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:56 np0005548790.localdomain python3.9[149219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:56 np0005548790.localdomain sudo[149217]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:57 np0005548790.localdomain sudo[149292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwgltjaiwiioydmxwbavmpkittfyiwsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013696.4166973-568-235693012244106/AnsiballZ_copy.py
Dec 06 09:34:57 np0005548790.localdomain sudo[149292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:57 np0005548790.localdomain python3.9[149294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013696.4166973-568-235693012244106/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:57 np0005548790.localdomain sudo[149292]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41451 DF PROTO=TCP SPT=45626 DPT=9105 SEQ=3677954985 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4BF1F0000000001030307) 
Dec 06 09:34:57 np0005548790.localdomain sudo[149384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thnfpqnsayyoetqsxyptwzehbrdilxxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013697.6236496-614-201060841099594/AnsiballZ_stat.py
Dec 06 09:34:57 np0005548790.localdomain sudo[149384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:58 np0005548790.localdomain python3.9[149386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:34:58 np0005548790.localdomain sudo[149384]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:58 np0005548790.localdomain sudo[149459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsnyxxrtdcozghxdgjpxdbaxejikdjor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013697.6236496-614-201060841099594/AnsiballZ_copy.py
Dec 06 09:34:58 np0005548790.localdomain sudo[149459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:58 np0005548790.localdomain python3.9[149461]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765013697.6236496-614-201060841099594/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:58 np0005548790.localdomain sudo[149459]: pam_unix(sudo:session): session closed for user root
Dec 06 09:34:59 np0005548790.localdomain sudo[149551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viecgmevkfoqvvopckflmodclbnxpvdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013698.8694313-658-188000309571608/AnsiballZ_file.py
Dec 06 09:34:59 np0005548790.localdomain sudo[149551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:34:59 np0005548790.localdomain python3.9[149553]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:34:59 np0005548790.localdomain sudo[149551]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:00 np0005548790.localdomain sudo[149643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyznghowgcasngbufjvailgkvrwzbhpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013700.3595285-683-253722994492509/AnsiballZ_command.py
Dec 06 09:35:00 np0005548790.localdomain sudo[149643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15882 DF PROTO=TCP SPT=39308 DPT=9105 SEQ=2185136414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4CB5F0000000001030307) 
Dec 06 09:35:00 np0005548790.localdomain python3.9[149645]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:00 np0005548790.localdomain sudo[149643]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:02 np0005548790.localdomain sudo[149738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqzavmgiwakuckbqdjzvfrfvsbkozibi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013701.0448508-707-276686390580032/AnsiballZ_blockinfile.py
Dec 06 09:35:02 np0005548790.localdomain sudo[149738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:02 np0005548790.localdomain python3.9[149740]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:02 np0005548790.localdomain sudo[149738]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:03 np0005548790.localdomain sudo[149830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jldiineznrjpsqwrtamwcuhmbplmxyxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013703.21103-734-98277164951158/AnsiballZ_command.py
Dec 06 09:35:03 np0005548790.localdomain sudo[149830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:03 np0005548790.localdomain python3.9[149832]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:03 np0005548790.localdomain sudo[149830]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42045 DF PROTO=TCP SPT=49958 DPT=9102 SEQ=1075088325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4D71F0000000001030307) 
Dec 06 09:35:04 np0005548790.localdomain sudo[149923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkviongeubxucmeetxlxalamrozrxieg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013703.8411338-757-277398412807820/AnsiballZ_stat.py
Dec 06 09:35:04 np0005548790.localdomain sudo[149923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:04 np0005548790.localdomain python3.9[149925]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:35:04 np0005548790.localdomain sudo[149923]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:04 np0005548790.localdomain sudo[150017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmwzdkvtkeftciznnqkggmjuxeupxfzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013704.5424116-781-240670864329649/AnsiballZ_command.py
Dec 06 09:35:04 np0005548790.localdomain sudo[150017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:05 np0005548790.localdomain python3.9[150019]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:05 np0005548790.localdomain sudo[150017]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:05 np0005548790.localdomain sudo[150112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nckureykylnyaqhnmwaimfrzocisitjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013705.2488651-806-69455147282013/AnsiballZ_file.py
Dec 06 09:35:05 np0005548790.localdomain sudo[150112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:05 np0005548790.localdomain python3.9[150114]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:05 np0005548790.localdomain sudo[150112]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42109 DF PROTO=TCP SPT=33636 DPT=9101 SEQ=3072384246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4E3200000000001030307) 
Dec 06 09:35:07 np0005548790.localdomain python3.9[150204]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:35:08 np0005548790.localdomain sudo[150295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-woixabvaywpcyvuhmjghucpiygdptliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013708.0031807-925-59082110897480/AnsiballZ_command.py
Dec 06 09:35:08 np0005548790.localdomain sudo[150295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:08 np0005548790.localdomain python3.9[150297]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005548790.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:a2:0d:dc:1c" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:08 np0005548790.localdomain ovs-vsctl[150298]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005548790.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:a2:0d:dc:1c external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 06 09:35:08 np0005548790.localdomain sudo[150295]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:08 np0005548790.localdomain sudo[150388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmuzisxkzdnhgaxkfmqiedzwbmmtoqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013708.7116585-953-34299400808822/AnsiballZ_command.py
Dec 06 09:35:08 np0005548790.localdomain sudo[150388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:09 np0005548790.localdomain python3.9[150390]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ovs-vsctl show | grep -q "Manager"
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:09 np0005548790.localdomain sudo[150388]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:09 np0005548790.localdomain python3.9[150483]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:35:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27136 DF PROTO=TCP SPT=58202 DPT=9100 SEQ=1296907494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4F1200000000001030307) 
Dec 06 09:35:10 np0005548790.localdomain sudo[150575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjqzkannaizgdmueoizhcbwjaypsvrax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013710.1774402-1007-117499480448495/AnsiballZ_file.py
Dec 06 09:35:10 np0005548790.localdomain sudo[150575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:10 np0005548790.localdomain python3.9[150577]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:10 np0005548790.localdomain sudo[150575]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:11 np0005548790.localdomain sudo[150667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsedsmpfwqcgzgvuymjuxitnzqnefspw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013710.8463516-1030-268763233185314/AnsiballZ_stat.py
Dec 06 09:35:11 np0005548790.localdomain sudo[150667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:11 np0005548790.localdomain python3.9[150669]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:11 np0005548790.localdomain sudo[150667]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:11 np0005548790.localdomain sudo[150715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqcxxovozchurpjnozkjcyowqubfcsha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013710.8463516-1030-268763233185314/AnsiballZ_file.py
Dec 06 09:35:11 np0005548790.localdomain sudo[150715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:11 np0005548790.localdomain python3.9[150717]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:11 np0005548790.localdomain sudo[150715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:12 np0005548790.localdomain sudo[150807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkbxocmmfpkgutwoxxnhfvelrxwsxaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013711.8603542-1030-31837057917920/AnsiballZ_stat.py
Dec 06 09:35:12 np0005548790.localdomain sudo[150807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:12 np0005548790.localdomain python3.9[150809]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:12 np0005548790.localdomain sudo[150807]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:12 np0005548790.localdomain sudo[150855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrftjstvqsvphqyacygxhristbezvnoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013711.8603542-1030-31837057917920/AnsiballZ_file.py
Dec 06 09:35:12 np0005548790.localdomain sudo[150855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:12 np0005548790.localdomain python3.9[150857]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:12 np0005548790.localdomain sudo[150855]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9685 DF PROTO=TCP SPT=49260 DPT=9100 SEQ=3147018492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F4FC600000000001030307) 
Dec 06 09:35:13 np0005548790.localdomain sudo[150947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bpxhjvzfooqxvmdelrpdgmbljmqlmbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013713.096454-1099-251155117770132/AnsiballZ_file.py
Dec 06 09:35:13 np0005548790.localdomain sudo[150947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:13 np0005548790.localdomain python3.9[150949]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:13 np0005548790.localdomain sudo[150947]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:14 np0005548790.localdomain sudo[151039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmbquyxtkynzkzhlkbsqakumzhdcuczf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013714.344153-1124-22590451727574/AnsiballZ_stat.py
Dec 06 09:35:14 np0005548790.localdomain sudo[151039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:14 np0005548790.localdomain python3.9[151041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:14 np0005548790.localdomain sudo[151039]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:15 np0005548790.localdomain sudo[151087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mviojyeimqtwhuxkkfzhpkvkirxhfqkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013714.344153-1124-22590451727574/AnsiballZ_file.py
Dec 06 09:35:15 np0005548790.localdomain sudo[151087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:15 np0005548790.localdomain python3.9[151089]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:15 np0005548790.localdomain sudo[151087]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:15 np0005548790.localdomain sudo[151179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jabwpyfmcudxsfpzctqwipwwobdwhzni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013715.4336014-1159-35161532609168/AnsiballZ_stat.py
Dec 06 09:35:15 np0005548790.localdomain sudo[151179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:15 np0005548790.localdomain python3.9[151181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:15 np0005548790.localdomain sudo[151179]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:16 np0005548790.localdomain sudo[151227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tdxlgulhnzmryntevrywqbbwipwayxvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013715.4336014-1159-35161532609168/AnsiballZ_file.py
Dec 06 09:35:16 np0005548790.localdomain sudo[151227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:16 np0005548790.localdomain python3.9[151229]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:16 np0005548790.localdomain sudo[151227]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:17 np0005548790.localdomain sudo[151319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqriiyishrxnuhvwqjicssdoocvyelpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013717.1577284-1195-176799049997374/AnsiballZ_systemd.py
Dec 06 09:35:17 np0005548790.localdomain sudo[151319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:17 np0005548790.localdomain python3.9[151321]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:17 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:35:17 np0005548790.localdomain systemd-rc-local-generator[151345]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:17 np0005548790.localdomain systemd-sysv-generator[151351]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:35:18 np0005548790.localdomain sudo[151319]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45130 DF PROTO=TCP SPT=57666 DPT=9102 SEQ=622809908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F510620000000001030307) 
Dec 06 09:35:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57660 DF PROTO=TCP SPT=43468 DPT=9882 SEQ=2812636262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F510670000000001030307) 
Dec 06 09:35:18 np0005548790.localdomain sudo[151449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aemudmxtunvmghcsyuzcncwrvwhbhdvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013718.4791465-1219-132362805057034/AnsiballZ_stat.py
Dec 06 09:35:18 np0005548790.localdomain sudo[151449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:18 np0005548790.localdomain python3.9[151451]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:18 np0005548790.localdomain sudo[151449]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:19 np0005548790.localdomain sudo[151497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fnmkfixtnhihaaxdaguzzxwjoiijhmjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013718.4791465-1219-132362805057034/AnsiballZ_file.py
Dec 06 09:35:19 np0005548790.localdomain sudo[151497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:19 np0005548790.localdomain python3.9[151499]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:19 np0005548790.localdomain sudo[151497]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:19 np0005548790.localdomain sudo[151589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ooiyiqdktlhgrcnmfsezuruzodlwwjwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013719.620258-1256-227479477711641/AnsiballZ_stat.py
Dec 06 09:35:19 np0005548790.localdomain sudo[151589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:20 np0005548790.localdomain python3.9[151591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:20 np0005548790.localdomain sudo[151589]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:20 np0005548790.localdomain sudo[151637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxgwcmseutwnytfmderpwilwsqtanled ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013719.620258-1256-227479477711641/AnsiballZ_file.py
Dec 06 09:35:20 np0005548790.localdomain sudo[151637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:20 np0005548790.localdomain python3.9[151639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:20 np0005548790.localdomain sudo[151637]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:20 np0005548790.localdomain sudo[151729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojucvgiomvbzrqklyhhuleamquvxnazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013720.7110615-1292-193489511287630/AnsiballZ_systemd.py
Dec 06 09:35:20 np0005548790.localdomain sudo[151729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:21 np0005548790.localdomain python3.9[151731]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:21 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:35:21 np0005548790.localdomain systemd-rc-local-generator[151755]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:21 np0005548790.localdomain systemd-sysv-generator[151760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45132 DF PROTO=TCP SPT=57666 DPT=9102 SEQ=622809908 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F51C5F0000000001030307) 
Dec 06 09:35:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:35:21 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:35:21 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:35:21 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:35:21 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:35:21 np0005548790.localdomain sudo[151729]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:23 np0005548790.localdomain sudo[151863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oegaorogrnnczwpnnpaaiobalknuvsaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013723.0985715-1321-265531679494908/AnsiballZ_file.py
Dec 06 09:35:23 np0005548790.localdomain sudo[151863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:23 np0005548790.localdomain python3.9[151865]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:23 np0005548790.localdomain sudo[151863]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:24 np0005548790.localdomain sudo[151955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-titvfaaqnjgwzkqbnixzqqovducsnrso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013723.7795942-1345-204120280581039/AnsiballZ_stat.py
Dec 06 09:35:24 np0005548790.localdomain sudo[151955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:24 np0005548790.localdomain python3.9[151957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:24 np0005548790.localdomain sudo[151955]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:24 np0005548790.localdomain sudo[152028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nldsgisdsxounhsxzxqgosqznrtntjzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013723.7795942-1345-204120280581039/AnsiballZ_copy.py
Dec 06 09:35:24 np0005548790.localdomain sudo[152028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18373 DF PROTO=TCP SPT=57402 DPT=9105 SEQ=2703988733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F528DF0000000001030307) 
Dec 06 09:35:24 np0005548790.localdomain python3.9[152030]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013723.7795942-1345-204120280581039/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:24 np0005548790.localdomain sudo[152028]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:25 np0005548790.localdomain sudo[152120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gekikxrefldamcssnowxgoxwopiyqjpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013725.7409298-1396-223207975910120/AnsiballZ_file.py
Dec 06 09:35:26 np0005548790.localdomain sudo[152120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:26 np0005548790.localdomain python3.9[152122]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:26 np0005548790.localdomain sudo[152120]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:26 np0005548790.localdomain sudo[152212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmdyswzurdmztjhiyzmxetgjdmplohcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013726.507465-1420-22320488264241/AnsiballZ_stat.py
Dec 06 09:35:26 np0005548790.localdomain sudo[152212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:26 np0005548790.localdomain python3.9[152214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:35:26 np0005548790.localdomain sudo[152212]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59784 DF PROTO=TCP SPT=44548 DPT=9105 SEQ=3463971875 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5351F0000000001030307) 
Dec 06 09:35:28 np0005548790.localdomain sudo[152287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cofbbacwqloysjqhpfccbjazkdgiwlcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013726.507465-1420-22320488264241/AnsiballZ_copy.py
Dec 06 09:35:28 np0005548790.localdomain sudo[152287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:28 np0005548790.localdomain python3.9[152289]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013726.507465-1420-22320488264241/.source.json _original_basename=.n_tge92k follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:28 np0005548790.localdomain sudo[152287]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:28 np0005548790.localdomain sudo[152379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syelptuawzjrwvhbeywqcqvobhlnshsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013728.359576-1466-137378325798871/AnsiballZ_file.py
Dec 06 09:35:28 np0005548790.localdomain sudo[152379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:28 np0005548790.localdomain python3.9[152381]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:28 np0005548790.localdomain sudo[152379]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:29 np0005548790.localdomain sudo[152471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhfemttwyghjsijjpsyujvzvthoqyazu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013729.0928357-1489-229437217292925/AnsiballZ_stat.py
Dec 06 09:35:29 np0005548790.localdomain sudo[152471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:29 np0005548790.localdomain sudo[152471]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:29 np0005548790.localdomain sudo[152544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxcuzibjqiycfsgkponhnsrmdwzpqfvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013729.0928357-1489-229437217292925/AnsiballZ_copy.py
Dec 06 09:35:29 np0005548790.localdomain sudo[152544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:30 np0005548790.localdomain sudo[152544]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18375 DF PROTO=TCP SPT=57402 DPT=9105 SEQ=2703988733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5409F0000000001030307) 
Dec 06 09:35:30 np0005548790.localdomain sudo[152636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pthtzhwxkqktvyldeuomdymgbwhzajij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013730.428267-1541-270604832564001/AnsiballZ_container_config_data.py
Dec 06 09:35:30 np0005548790.localdomain sudo[152636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:31 np0005548790.localdomain python3.9[152638]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 06 09:35:31 np0005548790.localdomain sudo[152636]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:31 np0005548790.localdomain sudo[152728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-piiggdbtgdmevfmusivblblokzqiqyvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013731.290922-1567-264056689771080/AnsiballZ_container_config_hash.py
Dec 06 09:35:31 np0005548790.localdomain sudo[152728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:31 np0005548790.localdomain python3.9[152730]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:35:31 np0005548790.localdomain sudo[152728]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:32 np0005548790.localdomain sudo[152820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnpfowsswyenrdxvdniegjojmrywnmlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013732.182614-1594-159323868577748/AnsiballZ_podman_container_info.py
Dec 06 09:35:32 np0005548790.localdomain sudo[152820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:32 np0005548790.localdomain python3.9[152822]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:35:33 np0005548790.localdomain sudo[152820]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57664 DF PROTO=TCP SPT=43468 DPT=9882 SEQ=2812636262 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F54D1F0000000001030307) 
Dec 06 09:35:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15865 DF PROTO=TCP SPT=37848 DPT=9101 SEQ=3038854001 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5571F0000000001030307) 
Dec 06 09:35:37 np0005548790.localdomain sudo[152939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fozxfzrsfzgmdgqdjlvsekebjoohfmhf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013736.162677-1633-236623127371318/AnsiballZ_edpm_container_manage.py
Dec 06 09:35:37 np0005548790.localdomain sudo[152939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:37 np0005548790.localdomain python3[152941]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:35:37 np0005548790.localdomain python3[152941]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",
                                                                    "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:38:47.246477714Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 345722821,
                                                                    "VirtualSize": 345722821,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",
                                                                              "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:22.759131427Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:13:25.258260855Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:13:28.025145079Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:13.535675197Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:47.244104142Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:48.759416475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 09:35:37 np0005548790.localdomain podman[152992]: 2025-12-06 09:35:37.832386375 +0000 UTC m=+0.082123147 container remove 8928689b97a263baa35c3c978217bdae0ccee4df445a81c498381d5eec4c05b3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, 
build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team)
Dec 06 09:35:37 np0005548790.localdomain python3[152941]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Dec 06 09:35:37 np0005548790.localdomain podman[153005]: 
Dec 06 09:35:37 np0005548790.localdomain podman[153005]: 2025-12-06 09:35:37.936170045 +0000 UTC m=+0.081736777 container create f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:35:37 np0005548790.localdomain podman[153005]: 2025-12-06 09:35:37.900533729 +0000 UTC m=+0.046100491 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 09:35:37 np0005548790.localdomain python3[152941]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 06 09:35:38 np0005548790.localdomain sudo[152939]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:38 np0005548790.localdomain sudo[153134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljpurxtxxktgycpyyxbjindlxtvegsya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013738.4814653-1657-246257894173443/AnsiballZ_stat.py
Dec 06 09:35:38 np0005548790.localdomain sudo[153134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:38 np0005548790.localdomain python3.9[153136]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:35:38 np0005548790.localdomain sudo[153134]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:39 np0005548790.localdomain sudo[153228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnwtezyrfgomqtpildejbvrxhogvcdiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013739.2317462-1684-228914097653113/AnsiballZ_file.py
Dec 06 09:35:39 np0005548790.localdomain sudo[153228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:39 np0005548790.localdomain python3.9[153230]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:39 np0005548790.localdomain sudo[153228]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:39 np0005548790.localdomain sudo[153274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwjqkqozlvmbdtycsvcklrczvyvyjzzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013739.2317462-1684-228914097653113/AnsiballZ_stat.py
Dec 06 09:35:39 np0005548790.localdomain sudo[153274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17020 DF PROTO=TCP SPT=53566 DPT=9100 SEQ=223540018 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5651F0000000001030307) 
Dec 06 09:35:40 np0005548790.localdomain python3.9[153276]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:35:40 np0005548790.localdomain sudo[153274]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:40 np0005548790.localdomain sudo[153365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbcxthdqewibeitexwhouidsdyqgtzsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013740.1663833-1684-82948483667997/AnsiballZ_copy.py
Dec 06 09:35:40 np0005548790.localdomain sudo[153365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:40 np0005548790.localdomain python3.9[153367]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013740.1663833-1684-82948483667997/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:35:40 np0005548790.localdomain sudo[153365]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:41 np0005548790.localdomain sudo[153411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wimfpybxoixvbnqjpeptnmqxnttuxvor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013740.1663833-1684-82948483667997/AnsiballZ_systemd.py
Dec 06 09:35:41 np0005548790.localdomain sudo[153411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:41 np0005548790.localdomain python3.9[153413]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:35:41 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:35:41 np0005548790.localdomain systemd-rc-local-generator[153435]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:41 np0005548790.localdomain systemd-sysv-generator[153441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:35:42 np0005548790.localdomain sudo[153411]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:42 np0005548790.localdomain sudo[153493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmbumteapxoznwvratugxtrzvaumwitm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013740.1663833-1684-82948483667997/AnsiballZ_systemd.py
Dec 06 09:35:42 np0005548790.localdomain sudo[153493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:42 np0005548790.localdomain python3.9[153495]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:35:42 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:35:42 np0005548790.localdomain systemd-rc-local-generator[153522]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:42 np0005548790.localdomain systemd-sysv-generator[153526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Starting ovn_controller container...
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:35:43 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c38acc12cb0b7e809a6d42401897cc17a7237fa6e4fea237e7ce49478b7732/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:35:43 np0005548790.localdomain podman[153537]: 2025-12-06 09:35:43.245205995 +0000 UTC m=+0.136691529 container init f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + sudo -E kolla_set_configs
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:35:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50337 DF PROTO=TCP SPT=53488 DPT=9100 SEQ=3135094276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5719F0000000001030307) 
Dec 06 09:35:43 np0005548790.localdomain podman[153537]: 2025-12-06 09:35:43.282053522 +0000 UTC m=+0.173538986 container start f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:35:43 np0005548790.localdomain edpm-start-podman-container[153537]: ovn_controller
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:35:43 np0005548790.localdomain podman[153560]: 2025-12-06 09:35:43.375119852 +0000 UTC m=+0.084726941 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 06 09:35:43 np0005548790.localdomain edpm-start-podman-container[153536]: Creating additional drop-in dependency for "ovn_controller" (f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89)
Dec 06 09:35:43 np0005548790.localdomain podman[153560]: 2025-12-06 09:35:43.455624297 +0000 UTC m=+0.165231446 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:35:43 np0005548790.localdomain podman[153560]: unhealthy
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Queued start job for default target Main User Target.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Created slice User Application Slice.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Reached target Paths.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Reached target Timers.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Starting D-Bus User Message Bus Socket...
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Starting Create User's Volatile Files and Directories...
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Finished Create User's Volatile Files and Directories.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Listening on D-Bus User Message Bus Socket.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Reached target Sockets.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Reached target Basic System.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Reached target Main User Target.
Dec 06 09:35:43 np0005548790.localdomain systemd[153582]: Startup finished in 132ms.
Dec 06 09:35:43 np0005548790.localdomain systemd-rc-local-generator[153638]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:35:43 np0005548790.localdomain systemd-sysv-generator[153642]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started User Manager for UID 0.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started ovn_controller container.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Failed with result 'exit-code'.
Dec 06 09:35:43 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Dec 06 09:35:43 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 09:35:43 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started Session c11 of User root.
Dec 06 09:35:43 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:35:43 np0005548790.localdomain sudo[153493]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: INFO:__main__:Validating config file
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: INFO:__main__:Writing out command to execute
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: session-c11.scope: Deactivated successfully.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: ++ cat /run_command
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + ARGS=
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + sudo kolla_copy_cacerts
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: Started Session c12 of User root.
Dec 06 09:35:43 np0005548790.localdomain systemd[1]: session-c12.scope: Deactivated successfully.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + [[ ! -n '' ]]
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + . kolla_extend_start
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + umask 0022
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00013|main|INFO|OVS feature set changed, force recompute.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00021|main|INFO|OVS feature set changed, force recompute.
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:35:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:35:43Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 06 09:35:44 np0005548790.localdomain sudo[153750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtefredwuaqmetsrbzfxjbpwhbbyykgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013744.2660687-1769-241082882193522/AnsiballZ_command.py
Dec 06 09:35:44 np0005548790.localdomain sudo[153750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:44 np0005548790.localdomain python3.9[153752]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:44 np0005548790.localdomain ovs-vsctl[153753]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 06 09:35:44 np0005548790.localdomain sudo[153750]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:45 np0005548790.localdomain sudo[153843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxzsmpnttvdcoymkilatzocodpipvfvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013744.9249413-1794-34376403783059/AnsiballZ_command.py
Dec 06 09:35:45 np0005548790.localdomain sudo[153843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:45 np0005548790.localdomain python3.9[153845]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:45 np0005548790.localdomain ovs-vsctl[153847]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 06 09:35:45 np0005548790.localdomain sudo[153843]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:45 np0005548790.localdomain sudo[153863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:35:45 np0005548790.localdomain sudo[153863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:35:45 np0005548790.localdomain sudo[153863]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:45 np0005548790.localdomain sudo[153878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:35:45 np0005548790.localdomain sudo[153878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:35:46 np0005548790.localdomain sudo[153985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrjnzlsrsfkdgkyfgdufyvswsnvjhvol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013746.072003-1834-19899860576675/AnsiballZ_command.py
Dec 06 09:35:46 np0005548790.localdomain sudo[153985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:46 np0005548790.localdomain sudo[153878]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:46 np0005548790.localdomain python3.9[153989]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:35:46 np0005548790.localdomain ovs-vsctl[154002]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 06 09:35:46 np0005548790.localdomain sudo[153985]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:47 np0005548790.localdomain sshd[146887]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:35:47 np0005548790.localdomain systemd[1]: session-49.scope: Deactivated successfully.
Dec 06 09:35:47 np0005548790.localdomain systemd[1]: session-49.scope: Consumed 39.776s CPU time.
Dec 06 09:35:47 np0005548790.localdomain systemd-logind[760]: Session 49 logged out. Waiting for processes to exit.
Dec 06 09:35:47 np0005548790.localdomain systemd-logind[760]: Removed session 49.
Dec 06 09:35:47 np0005548790.localdomain sudo[154017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:35:47 np0005548790.localdomain sudo[154017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:35:47 np0005548790.localdomain sudo[154017]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34716 DF PROTO=TCP SPT=39526 DPT=9102 SEQ=3507126580 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F585900000000001030307) 
Dec 06 09:35:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41056 DF PROTO=TCP SPT=57660 DPT=9882 SEQ=3428686666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F585980000000001030307) 
Dec 06 09:35:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41058 DF PROTO=TCP SPT=57660 DPT=9882 SEQ=3428686666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5919F0000000001030307) 
Dec 06 09:35:52 np0005548790.localdomain sshd[154033]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:35:52 np0005548790.localdomain sshd[154033]: Accepted publickey for zuul from 192.168.122.30 port 59942 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:35:52 np0005548790.localdomain systemd-logind[760]: New session 51 of user zuul.
Dec 06 09:35:52 np0005548790.localdomain systemd[1]: Started Session 51 of User zuul.
Dec 06 09:35:52 np0005548790.localdomain sshd[154033]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:35:53 np0005548790.localdomain python3.9[154126]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:35:53 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Activating special unit Exit the Session...
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped target Main User Target.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped target Basic System.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped target Paths.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped target Sockets.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped target Timers.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Closed D-Bus User Message Bus Socket.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Stopped Create User's Volatile Files and Directories.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Removed slice User Application Slice.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Reached target Shutdown.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Finished Exit the Session.
Dec 06 09:35:53 np0005548790.localdomain systemd[153582]: Reached target Exit the Session.
Dec 06 09:35:53 np0005548790.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 06 09:35:53 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 06 09:35:53 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 06 09:35:54 np0005548790.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 06 09:35:54 np0005548790.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 06 09:35:54 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 06 09:35:54 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 06 09:35:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35750 DF PROTO=TCP SPT=44992 DPT=9105 SEQ=4076690111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F59DDF0000000001030307) 
Dec 06 09:35:54 np0005548790.localdomain sudo[154223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjfjemglbuugjwpsdookhwrsvujjmcbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013754.4412184-64-86989232192999/AnsiballZ_file.py
Dec 06 09:35:54 np0005548790.localdomain sudo[154223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:55 np0005548790.localdomain python3.9[154225]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:55 np0005548790.localdomain sudo[154223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:55 np0005548790.localdomain sudo[154315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvxnxvlwjpzyqkdozfbtgdxulinhonhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013755.2052093-64-13603937776760/AnsiballZ_file.py
Dec 06 09:35:55 np0005548790.localdomain sudo[154315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:55 np0005548790.localdomain python3.9[154317]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:55 np0005548790.localdomain sudo[154315]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:56 np0005548790.localdomain sudo[154407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oainmwcotxcsbdkusvixgsgehbjuacjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013755.8424125-64-218226513178064/AnsiballZ_file.py
Dec 06 09:35:56 np0005548790.localdomain sudo[154407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:56 np0005548790.localdomain python3.9[154409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:56 np0005548790.localdomain sudo[154407]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:56 np0005548790.localdomain sudo[154499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eavewrykiisqeixegiqshvoakbejbmwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013756.4330666-64-53918446134375/AnsiballZ_file.py
Dec 06 09:35:56 np0005548790.localdomain sudo[154499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:56 np0005548790.localdomain python3.9[154501]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:56 np0005548790.localdomain sudo[154499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:57 np0005548790.localdomain sudo[154591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lydpqalyzatwzpvclzmgdwshfnfjmqxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013757.0261905-64-49917403178635/AnsiballZ_file.py
Dec 06 09:35:57 np0005548790.localdomain sudo[154591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:57 np0005548790.localdomain python3.9[154593]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:35:57 np0005548790.localdomain sudo[154591]: pam_unix(sudo:session): session closed for user root
Dec 06 09:35:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15885 DF PROTO=TCP SPT=39308 DPT=9105 SEQ=2185136414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5A91F0000000001030307) 
Dec 06 09:35:58 np0005548790.localdomain python3.9[154683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:35:58 np0005548790.localdomain sudo[154773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkjxwegdloibhjlodoclswhfugkmjnrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013758.5141363-196-3857148744700/AnsiballZ_seboolean.py
Dec 06 09:35:58 np0005548790.localdomain sudo[154773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:35:59 np0005548790.localdomain python3.9[154775]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 06 09:35:59 np0005548790.localdomain sudo[154773]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:00 np0005548790.localdomain python3.9[154865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35752 DF PROTO=TCP SPT=44992 DPT=9105 SEQ=4076690111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5B59F0000000001030307) 
Dec 06 09:36:00 np0005548790.localdomain python3.9[154938]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013759.4393368-220-74547428061104/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:01 np0005548790.localdomain python3.9[155028]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:02 np0005548790.localdomain python3.9[155102]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013760.9110038-265-254070197436673/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:03 np0005548790.localdomain sudo[155192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyzhineaedrbbngdxmzgueeohumnekbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013762.8769438-316-219339713158022/AnsiballZ_setup.py
Dec 06 09:36:03 np0005548790.localdomain sudo[155192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:03 np0005548790.localdomain python3.9[155194]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:36:03 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:36:03Z|00023|memory|INFO|14240 kB peak resident set size after 19.7 seconds
Dec 06 09:36:03 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:36:03Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:10 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Dec 06 09:36:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41060 DF PROTO=TCP SPT=57660 DPT=9882 SEQ=3428686666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5C11F0000000001030307) 
Dec 06 09:36:03 np0005548790.localdomain sudo[155192]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:04 np0005548790.localdomain sudo[155246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enabpkooxkplagqhhojsudizypbeaixh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013762.8769438-316-219339713158022/AnsiballZ_dnf.py
Dec 06 09:36:04 np0005548790.localdomain sudo[155246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:04 np0005548790.localdomain python3.9[155248]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:36:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37257 DF PROTO=TCP SPT=56174 DPT=9101 SEQ=890942418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5CD1F0000000001030307) 
Dec 06 09:36:07 np0005548790.localdomain sudo[155246]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:08 np0005548790.localdomain sudo[155340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyhckrzfcsshvsndqbkxoicpxayxzdyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013767.7809625-352-280289889766331/AnsiballZ_systemd.py
Dec 06 09:36:08 np0005548790.localdomain sudo[155340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:08 np0005548790.localdomain python3.9[155342]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:36:08 np0005548790.localdomain sudo[155340]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:09 np0005548790.localdomain python3.9[155435]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:09 np0005548790.localdomain python3.9[155506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013768.8474326-376-90121231804757/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:10 np0005548790.localdomain python3.9[155596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9688 DF PROTO=TCP SPT=49260 DPT=9100 SEQ=3147018492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5DB1F0000000001030307) 
Dec 06 09:36:10 np0005548790.localdomain python3.9[155667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013769.8557358-376-130176123303198/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:12 np0005548790.localdomain python3.9[155757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:12 np0005548790.localdomain python3.9[155828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013771.6534276-508-61574595468431/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37899 DF PROTO=TCP SPT=59620 DPT=9100 SEQ=2468267367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5E6E00000000001030307) 
Dec 06 09:36:13 np0005548790.localdomain python3.9[155918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:14 np0005548790.localdomain python3.9[155989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013772.609161-508-154071907962492/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:36:14 np0005548790.localdomain systemd[1]: tmp-crun.0esAgp.mount: Deactivated successfully.
Dec 06 09:36:14 np0005548790.localdomain podman[156064]: 2025-12-06 09:36:14.599620343 +0000 UTC m=+0.103605586 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:36:14 np0005548790.localdomain podman[156064]: 2025-12-06 09:36:14.684028666 +0000 UTC m=+0.188013859 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:36:14 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:36:14 np0005548790.localdomain python3.9[156089]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:36:15 np0005548790.localdomain sudo[156196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnsvunbcqcyqnssgppvxymlwmqkzczqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013775.637191-623-13694215501350/AnsiballZ_file.py
Dec 06 09:36:15 np0005548790.localdomain sudo[156196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:16 np0005548790.localdomain python3.9[156198]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:16 np0005548790.localdomain sudo[156196]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:16 np0005548790.localdomain sudo[156288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbmqyucahkfsxdsbsagbpnamoapucerl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013776.3543196-647-226457624588406/AnsiballZ_stat.py
Dec 06 09:36:16 np0005548790.localdomain sudo[156288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:16 np0005548790.localdomain python3.9[156290]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:16 np0005548790.localdomain sudo[156288]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:17 np0005548790.localdomain sudo[156336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bmcicmmvuvqwtfwctfbiapucfuupgpig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013776.3543196-647-226457624588406/AnsiballZ_file.py
Dec 06 09:36:17 np0005548790.localdomain sudo[156336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:17 np0005548790.localdomain python3.9[156338]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:17 np0005548790.localdomain sudo[156336]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:17 np0005548790.localdomain sudo[156428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsgpuogxqeeboxwndtabqmndupckmlcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013777.374908-647-68840622733825/AnsiballZ_stat.py
Dec 06 09:36:17 np0005548790.localdomain sudo[156428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:17 np0005548790.localdomain python3.9[156430]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:17 np0005548790.localdomain sudo[156428]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:18 np0005548790.localdomain sudo[156476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icyvjzsselffdsbmfbwedeojquntdbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013777.374908-647-68840622733825/AnsiballZ_file.py
Dec 06 09:36:18 np0005548790.localdomain sudo[156476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:18 np0005548790.localdomain python3.9[156478]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:18 np0005548790.localdomain sudo[156476]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31596 DF PROTO=TCP SPT=33610 DPT=9102 SEQ=1157527478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5FAC10000000001030307) 
Dec 06 09:36:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6678 DF PROTO=TCP SPT=58552 DPT=9882 SEQ=2407560748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F5FAC80000000001030307) 
Dec 06 09:36:18 np0005548790.localdomain sudo[156568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xknftfsdsgicjjglzmnoeastpfajlcju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013778.5053139-715-47257918211934/AnsiballZ_file.py
Dec 06 09:36:18 np0005548790.localdomain sudo[156568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:18 np0005548790.localdomain python3.9[156570]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:18 np0005548790.localdomain sudo[156568]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:19 np0005548790.localdomain sudo[156660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twpqkuqwjujmsprrxvapqhmhegnklksv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013779.1287625-740-53182062432353/AnsiballZ_stat.py
Dec 06 09:36:19 np0005548790.localdomain sudo[156660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:19 np0005548790.localdomain python3.9[156662]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:19 np0005548790.localdomain sudo[156660]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:19 np0005548790.localdomain sudo[156708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yskuiuhzwzaththvubycjbcrzakvbzjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013779.1287625-740-53182062432353/AnsiballZ_file.py
Dec 06 09:36:19 np0005548790.localdomain sudo[156708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:20 np0005548790.localdomain python3.9[156710]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:20 np0005548790.localdomain sudo[156708]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:20 np0005548790.localdomain sudo[156800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnxgpkvvighvnwmdufazdaoeawtntsey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013780.2666264-775-45134811621510/AnsiballZ_stat.py
Dec 06 09:36:20 np0005548790.localdomain sudo[156800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:20 np0005548790.localdomain python3.9[156802]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:20 np0005548790.localdomain sudo[156800]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:20 np0005548790.localdomain sudo[156848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utozweuxgrtmtnjsuoxqngumteewglgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013780.2666264-775-45134811621510/AnsiballZ_file.py
Dec 06 09:36:20 np0005548790.localdomain sudo[156848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:21 np0005548790.localdomain python3.9[156850]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:21 np0005548790.localdomain sudo[156848]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31598 DF PROTO=TCP SPT=33610 DPT=9102 SEQ=1157527478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F606DF0000000001030307) 
Dec 06 09:36:21 np0005548790.localdomain sudo[156940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixnfkjmyizvbrefbwcaebwncfzjfeqvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013781.385917-811-99584322780335/AnsiballZ_systemd.py
Dec 06 09:36:21 np0005548790.localdomain sudo[156940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:22 np0005548790.localdomain python3.9[156942]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:36:22 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:36:22 np0005548790.localdomain systemd-sysv-generator[156971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:22 np0005548790.localdomain systemd-rc-local-generator[156966]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:22 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:36:22 np0005548790.localdomain sudo[156940]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:22 np0005548790.localdomain sudo[157070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idgiprbprhqbzrbngwzurobgqndlllxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013782.5413349-835-154517619930556/AnsiballZ_stat.py
Dec 06 09:36:22 np0005548790.localdomain sudo[157070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:23 np0005548790.localdomain python3.9[157072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:23 np0005548790.localdomain sudo[157070]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:23 np0005548790.localdomain sudo[157118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocaurqhueekkpsruevkwzrbegkzfannp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013782.5413349-835-154517619930556/AnsiballZ_file.py
Dec 06 09:36:23 np0005548790.localdomain sudo[157118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:23 np0005548790.localdomain python3.9[157120]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:23 np0005548790.localdomain sudo[157118]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:23 np0005548790.localdomain sudo[157210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbfixgjucnjtcjbbemqertahhzahaijf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013783.6794062-871-12770348601289/AnsiballZ_stat.py
Dec 06 09:36:23 np0005548790.localdomain sudo[157210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:24 np0005548790.localdomain python3.9[157212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:24 np0005548790.localdomain sudo[157210]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:24 np0005548790.localdomain sudo[157258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzblejnqdqdzuefakghgymcflkytxbmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013783.6794062-871-12770348601289/AnsiballZ_file.py
Dec 06 09:36:24 np0005548790.localdomain sudo[157258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:24 np0005548790.localdomain python3.9[157260]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:24 np0005548790.localdomain sudo[157258]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59097 DF PROTO=TCP SPT=43512 DPT=9105 SEQ=1403901614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6131F0000000001030307) 
Dec 06 09:36:25 np0005548790.localdomain sudo[157350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ambpjhjcsycdztsqsqompdrvdrjoikep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013784.7925053-908-3582222455113/AnsiballZ_systemd.py
Dec 06 09:36:25 np0005548790.localdomain sudo[157350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:25 np0005548790.localdomain python3.9[157352]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:36:25 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:36:25 np0005548790.localdomain systemd-rc-local-generator[157375]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:25 np0005548790.localdomain systemd-sysv-generator[157381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:36:25 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:36:25 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:36:25 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:36:25 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:36:25 np0005548790.localdomain sudo[157350]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:26 np0005548790.localdomain sudo[157484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahkpvyocpwlpmnhjllkyyjkuhxdtkdzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013786.7434654-937-29783230610470/AnsiballZ_file.py
Dec 06 09:36:26 np0005548790.localdomain sudo[157484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:27 np0005548790.localdomain python3.9[157486]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:27 np0005548790.localdomain sudo[157484]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18378 DF PROTO=TCP SPT=57402 DPT=9105 SEQ=2703988733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F61F200000000001030307) 
Dec 06 09:36:27 np0005548790.localdomain sudo[157576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enllgtvqxgjpazcjbkpbwwrbdzgzgnhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013787.3876503-961-199056268733567/AnsiballZ_stat.py
Dec 06 09:36:27 np0005548790.localdomain sudo[157576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:27 np0005548790.localdomain python3.9[157578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:27 np0005548790.localdomain sudo[157576]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:28 np0005548790.localdomain sudo[157649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prugrrffdbwdcxdqtdnmmgzfdoyccfky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013787.3876503-961-199056268733567/AnsiballZ_copy.py
Dec 06 09:36:28 np0005548790.localdomain sudo[157649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:29 np0005548790.localdomain python3.9[157651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765013787.3876503-961-199056268733567/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:29 np0005548790.localdomain sudo[157649]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:29 np0005548790.localdomain sudo[157741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqatgzwxhkytkyodrpoejifvxwmwmmeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013789.585681-1012-39295816358625/AnsiballZ_file.py
Dec 06 09:36:29 np0005548790.localdomain sudo[157741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:30 np0005548790.localdomain python3.9[157743]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:36:30 np0005548790.localdomain sudo[157741]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:30 np0005548790.localdomain sudo[157833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwbmrwxtxlcscdzomjnpiiicyxnhwvfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013790.2744372-1036-192101213815833/AnsiballZ_stat.py
Dec 06 09:36:30 np0005548790.localdomain sudo[157833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59099 DF PROTO=TCP SPT=43512 DPT=9105 SEQ=1403901614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F62ADF0000000001030307) 
Dec 06 09:36:30 np0005548790.localdomain python3.9[157835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:36:30 np0005548790.localdomain sudo[157833]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:31 np0005548790.localdomain sudo[157908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttdeuwbgazxgqtkqusgbdnkhwhmqrrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013790.2744372-1036-192101213815833/AnsiballZ_copy.py
Dec 06 09:36:31 np0005548790.localdomain sudo[157908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:31 np0005548790.localdomain python3.9[157910]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765013790.2744372-1036-192101213815833/.source.json _original_basename=.t2s7hvoa follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:31 np0005548790.localdomain sudo[157908]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:31 np0005548790.localdomain sudo[158000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwexgnmrxzghfnantzbgkvdqsxoiszbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013791.4927614-1081-211653771150378/AnsiballZ_file.py
Dec 06 09:36:31 np0005548790.localdomain sudo[158000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:31 np0005548790.localdomain python3.9[158002]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:31 np0005548790.localdomain sudo[158000]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:32 np0005548790.localdomain sudo[158092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekrjzctodwttlqksuudkhhmarzsqshbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013792.1930082-1105-133447256506449/AnsiballZ_stat.py
Dec 06 09:36:32 np0005548790.localdomain sudo[158092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:32 np0005548790.localdomain sudo[158092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:32 np0005548790.localdomain sudo[158165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxoszfofzeybaffmppuetzjmdzermzex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013792.1930082-1105-133447256506449/AnsiballZ_copy.py
Dec 06 09:36:32 np0005548790.localdomain sudo[158165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:33 np0005548790.localdomain sudo[158165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31600 DF PROTO=TCP SPT=33610 DPT=9102 SEQ=1157527478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F637200000000001030307) 
Dec 06 09:36:34 np0005548790.localdomain sudo[158257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hylqgcmaltojdnnluzcozgmqcchylxsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013793.6416929-1157-55726330374849/AnsiballZ_container_config_data.py
Dec 06 09:36:34 np0005548790.localdomain sudo[158257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:34 np0005548790.localdomain python3.9[158259]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 06 09:36:34 np0005548790.localdomain sudo[158257]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:34 np0005548790.localdomain sudo[158349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osqljrwydhkrppxfcxuyrpagzgmeabqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013794.4939191-1184-147998892989703/AnsiballZ_container_config_hash.py
Dec 06 09:36:34 np0005548790.localdomain sudo[158349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:35 np0005548790.localdomain python3.9[158351]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:36:35 np0005548790.localdomain sudo[158349]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:35 np0005548790.localdomain sudo[158441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qplkemupopnysdlkccivbpytenitqzbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013795.4170136-1210-173602654162197/AnsiballZ_podman_container_info.py
Dec 06 09:36:35 np0005548790.localdomain sudo[158441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:36 np0005548790.localdomain python3.9[158443]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:36:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7778 DF PROTO=TCP SPT=37656 DPT=9101 SEQ=972154228 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F641200000000001030307) 
Dec 06 09:36:36 np0005548790.localdomain sudo[158441]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:39 np0005548790.localdomain sudo[158560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guibqasxocrfnhyviaozveteerbdsiww ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765013799.4327662-1249-188824779683897/AnsiballZ_edpm_container_manage.py
Dec 06 09:36:39 np0005548790.localdomain sudo[158560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:39 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50340 DF PROTO=TCP SPT=53488 DPT=9100 SEQ=3135094276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F64F1F0000000001030307) 
Dec 06 09:36:40 np0005548790.localdomain python3[158562]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:36:40 np0005548790.localdomain python3[158562]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",
                                                                    "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:29:20.327314945Z",
                                                                    "Config": {
                                                                         "User": "neutron",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 784141054,
                                                                    "VirtualSize": 784141054,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",
                                                                              "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",
                                                                              "sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "neutron",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:50.18897737Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:50.762138914Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:13.720608935Z",
                                                                              "created_by": "/bin/sh -c dnf -y install iputils net-tools openstack-neutron openstack-neutron-rpc-server openstack-neutron-ml2 openvswitch python3-networking-baremetal python3-openvswitch python3-unbound && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:27.636630318Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/neutron-base/neutron_sudoers /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:40.546186661Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:52.875291445Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:27:22.608862134Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:35.764559413Z",
                                                                              "created_by": "/bin/sh -c dnf -y install libseccomp podman && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:40.983506098Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:44.803537768Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-agent-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:20.324920691Z",
                                                                              "created_by": "/bin/sh -c dnf -y install python3-networking-ovn-metadata-agent && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:20.324983383Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:24.215761584Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 09:36:40 np0005548790.localdomain podman[158610]: 2025-12-06 09:36:40.461669244 +0000 UTC m=+0.092587054 container remove 8689918e6d33cc8766e4a8bf46661a0f899dd76225950c059a0f4ea86b4e41e5 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '9b9208098644933bd8c0484efcd7b934'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_metadata_agent, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 06 09:36:40 np0005548790.localdomain python3[158562]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Dec 06 09:36:40 np0005548790.localdomain podman[158623]: 
Dec 06 09:36:40 np0005548790.localdomain podman[158623]: 2025-12-06 09:36:40.563296699 +0000 UTC m=+0.081813115 container create 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 09:36:40 np0005548790.localdomain podman[158623]: 2025-12-06 09:36:40.527692385 +0000 UTC m=+0.046208851 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 09:36:40 np0005548790.localdomain python3[158562]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume 
/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 09:36:40 np0005548790.localdomain sudo[158560]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:41 np0005548790.localdomain sudo[158751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lajthkdpcusigyrvanieprxoxvwngeet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013801.2941778-1273-221835054205945/AnsiballZ_stat.py
Dec 06 09:36:41 np0005548790.localdomain sudo[158751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:41 np0005548790.localdomain python3.9[158753]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:36:41 np0005548790.localdomain sudo[158751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:42 np0005548790.localdomain sudo[158845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvwaohsefnthnfbjpuhhetwkeviszewp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013802.0373418-1300-7243174882951/AnsiballZ_file.py
Dec 06 09:36:42 np0005548790.localdomain sudo[158845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:42 np0005548790.localdomain python3.9[158847]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:42 np0005548790.localdomain sudo[158845]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:42 np0005548790.localdomain sudo[158891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rytzfbzalxizknxnomyrinquwidnydre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013802.0373418-1300-7243174882951/AnsiballZ_stat.py
Dec 06 09:36:42 np0005548790.localdomain sudo[158891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:42 np0005548790.localdomain python3.9[158893]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:36:42 np0005548790.localdomain sudo[158891]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18092 DF PROTO=TCP SPT=34362 DPT=9100 SEQ=1902033170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F65C1F0000000001030307) 
Dec 06 09:36:43 np0005548790.localdomain sudo[158982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcytryxpyfwpjmkienqwclkfubvelnxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013802.9732976-1300-256130387486705/AnsiballZ_copy.py
Dec 06 09:36:43 np0005548790.localdomain sudo[158982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:43 np0005548790.localdomain python3.9[158984]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765013802.9732976-1300-256130387486705/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:36:43 np0005548790.localdomain sudo[158982]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:43 np0005548790.localdomain sudo[159028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-puzunhvbwkpctvxzwfpnmkacenjexepy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013802.9732976-1300-256130387486705/AnsiballZ_systemd.py
Dec 06 09:36:43 np0005548790.localdomain sudo[159028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:44 np0005548790.localdomain python3.9[159030]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:36:44 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:36:44 np0005548790.localdomain systemd-sysv-generator[159058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:44 np0005548790.localdomain systemd-rc-local-generator[159054]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:36:44 np0005548790.localdomain sudo[159028]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:44 np0005548790.localdomain sudo[159110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryzbyvexfilmyablbtnsirzherugrmvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013802.9732976-1300-256130387486705/AnsiballZ_systemd.py
Dec 06 09:36:44 np0005548790.localdomain sudo[159110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:36:44 np0005548790.localdomain podman[159113]: 2025-12-06 09:36:44.867456859 +0000 UTC m=+0.076294727 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:36:44 np0005548790.localdomain podman[159113]: 2025-12-06 09:36:44.944129834 +0000 UTC m=+0.152967732 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 09:36:44 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:36:45 np0005548790.localdomain python3.9[159112]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:36:46 np0005548790.localdomain systemd-rc-local-generator[159164]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:46 np0005548790.localdomain systemd-sysv-generator[159168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: Starting ovn_metadata_agent container...
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:36:46 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73eccc3003c2dce5aeb07433a2869138ccd9cc3735e2615ed988fa32b010cedd/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 09:36:46 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73eccc3003c2dce5aeb07433a2869138ccd9cc3735e2615ed988fa32b010cedd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:36:46 np0005548790.localdomain podman[159180]: 2025-12-06 09:36:46.617688113 +0000 UTC m=+0.155542412 container init 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + sudo -E kolla_set_configs
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:36:46 np0005548790.localdomain podman[159180]: 2025-12-06 09:36:46.649363932 +0000 UTC m=+0.187218171 container start 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 09:36:46 np0005548790.localdomain edpm-start-podman-container[159180]: ovn_metadata_agent
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Validating config file
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Copying service configuration files
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Writing out command to execute
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: ++ cat /run_command
Dec 06 09:36:46 np0005548790.localdomain podman[159202]: 2025-12-06 09:36:46.730061986 +0000 UTC m=+0.073364378 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + CMD=neutron-ovn-metadata-agent
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + ARGS=
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + sudo kolla_copy_cacerts
Dec 06 09:36:46 np0005548790.localdomain podman[159202]: 2025-12-06 09:36:46.735124882 +0000 UTC m=+0.078427284 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + [[ ! -n '' ]]
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + . kolla_extend_start
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: Running command: 'neutron-ovn-metadata-agent'
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + umask 0022
Dec 06 09:36:46 np0005548790.localdomain ovn_metadata_agent[159195]: + exec neutron-ovn-metadata-agent
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:36:46 np0005548790.localdomain edpm-start-podman-container[159179]: Creating additional drop-in dependency for "ovn_metadata_agent" (643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83)
Dec 06 09:36:46 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:36:46 np0005548790.localdomain systemd-rc-local-generator[159269]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:46 np0005548790.localdomain systemd-sysv-generator[159273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:47 np0005548790.localdomain sudo[159283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:36:47 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:36:47 np0005548790.localdomain systemd[1]: Started ovn_metadata_agent container.
Dec 06 09:36:47 np0005548790.localdomain sudo[159283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:36:47 np0005548790.localdomain sudo[159283]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:47 np0005548790.localdomain sudo[159110]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:47 np0005548790.localdomain sudo[159299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:36:47 np0005548790.localdomain sudo[159299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:36:47 np0005548790.localdomain sshd[154033]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:36:47 np0005548790.localdomain systemd-logind[760]: Session 51 logged out. Waiting for processes to exit.
Dec 06 09:36:47 np0005548790.localdomain systemd[1]: session-51.scope: Deactivated successfully.
Dec 06 09:36:47 np0005548790.localdomain systemd[1]: session-51.scope: Consumed 31.395s CPU time.
Dec 06 09:36:47 np0005548790.localdomain systemd-logind[760]: Removed session 51.
Dec 06 09:36:48 np0005548790.localdomain sudo[159299]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.311 159200 INFO neutron.common.config [-] Logging enabled!
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.311 159200 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.311 159200 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.312 159200 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.313 159200 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.314 159200 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.315 159200 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.316 159200 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.317 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.318 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.319 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.320 159200 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.321 159200 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.322 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.323 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.324 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.325 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.326 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.327 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.328 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.329 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.330 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.331 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.332 159200 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.333 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.334 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.335 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.336 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.337 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.338 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.339 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.340 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.341 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.342 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.342 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.342 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.342 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.342 159200 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.342 159200 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.350 159200 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.350 159200 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.350 159200 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.351 159200 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.351 159200 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.364 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 33b2d0f4-3dae-458c-b286-c937c7cb3d9e (UUID: 33b2d0f4-3dae-458c-b286-c937c7cb3d9e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.380 159200 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.380 159200 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.380 159200 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.380 159200 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.382 159200 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.385 159200 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 09:36:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31345 DF PROTO=TCP SPT=36592 DPT=9102 SEQ=3173696261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F66FF10000000001030307) 
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.391 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '33b2d0f4-3dae-458c-b286-c937c7cb3d9e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], external_ids={'neutron:ovn-metadata-id': 'b54403fb-cc12-5088-bbb6-613e8a08e54e', 'neutron:ovn-metadata-sb-cfg': '1'}, name=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, nb_cfg_timestamp=1765013752689, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.392 159200 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f8354514b50>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.393 159200 INFO oslo_service.service [-] Starting 1 workers
Dec 06 09:36:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36281 DF PROTO=TCP SPT=42616 DPT=9882 SEQ=3562224415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F66FF70000000001030307) 
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.395 159200 DEBUG oslo_service.service [-] Started child 159359 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.398 159200 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmphg84uf6f/privsep.sock']
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.399 159359 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-158109'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.422 159359 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.423 159359 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.423 159359 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.425 159359 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.427 159359 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.439 159359 INFO eventlet.wsgi.server [-] (159359) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 06 09:36:48 np0005548790.localdomain sudo[159364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:36:48 np0005548790.localdomain sudo[159364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:36:48 np0005548790.localdomain sudo[159364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.994 159200 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.995 159200 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmphg84uf6f/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.891 159379 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.896 159379 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.900 159379 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.900 159379 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159379
Dec 06 09:36:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:48.998 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d87bfd-8023-474b-8f68-3e1f64ae550b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.439 159379 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.439 159379 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.439 159379 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.876 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[2401edc6-0300-4fab-97af-31331975716c]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.880 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, column=external_ids, values=({'neutron:ovn-metadata-id': 'b54403fb-cc12-5088-bbb6-613e8a08e54e'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.881 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.882 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.893 159200 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.894 159200 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.894 159200 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.894 159200 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.894 159200 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.895 159200 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.895 159200 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.896 159200 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.896 159200 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.896 159200 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.897 159200 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.897 159200 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.897 159200 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.898 159200 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.898 159200 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.899 159200 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.899 159200 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.899 159200 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.900 159200 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.900 159200 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.900 159200 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.901 159200 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.901 159200 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.901 159200 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.902 159200 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.902 159200 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.903 159200 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.903 159200 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.904 159200 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.904 159200 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.904 159200 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.905 159200 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.905 159200 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.905 159200 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.906 159200 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.906 159200 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.906 159200 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.907 159200 DEBUG oslo_service.service [-] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.907 159200 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.908 159200 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.908 159200 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.908 159200 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.909 159200 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.909 159200 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.910 159200 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.910 159200 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.910 159200 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.910 159200 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.911 159200 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.911 159200 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.912 159200 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.912 159200 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.912 159200 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.912 159200 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.913 159200 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.913 159200 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.914 159200 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.914 159200 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.914 159200 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.915 159200 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.915 159200 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.915 159200 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.916 159200 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.916 159200 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.916 159200 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.917 159200 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.917 159200 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.918 159200 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.918 159200 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.918 159200 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.918 159200 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.918 159200 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.919 159200 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.919 159200 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.919 159200 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.919 159200 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.920 159200 DEBUG oslo_service.service [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.920 159200 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.920 159200 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.920 159200 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.920 159200 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.921 159200 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.921 159200 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.921 159200 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.921 159200 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.921 159200 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.922 159200 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.922 159200 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.922 159200 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.922 159200 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.922 159200 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.922 159200 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.923 159200 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.924 159200 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.925 159200 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.926 159200 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.927 159200 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.928 159200 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.929 159200 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.929 159200 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.929 159200 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.929 159200 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.929 159200 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.929 159200 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.930 159200 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.931 159200 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.931 159200 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.931 159200 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.931 159200 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.931 159200 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.931 159200 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.932 159200 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.933 159200 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.934 159200 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.935 159200 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.936 159200 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.937 159200 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.938 159200 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.939 159200 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.940 159200 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.941 159200 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.942 159200 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.943 159200 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.944 159200 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.945 159200 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.946 159200 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.947 159200 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.947 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.947 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.947 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.947 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.948 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.949 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.950 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.951 159200 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.952 159200 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.952 159200 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.952 159200 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:36:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:36:49.952 159200 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:36:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31347 DF PROTO=TCP SPT=36592 DPT=9102 SEQ=3173696261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F67BE00000000001030307) 
Dec 06 09:36:53 np0005548790.localdomain sshd[159384]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:36:54 np0005548790.localdomain sshd[159384]: Accepted publickey for zuul from 192.168.122.30 port 45594 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:36:54 np0005548790.localdomain systemd-logind[760]: New session 52 of user zuul.
Dec 06 09:36:54 np0005548790.localdomain systemd[1]: Started Session 52 of User zuul.
Dec 06 09:36:54 np0005548790.localdomain sshd[159384]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:36:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31027 DF PROTO=TCP SPT=33518 DPT=9105 SEQ=2645892822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6885F0000000001030307) 
Dec 06 09:36:55 np0005548790.localdomain python3.9[159477]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:36:56 np0005548790.localdomain sudo[159571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnbrsthrzvzfteddlhleonhmnkconsjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013815.6557236-64-146959661312279/AnsiballZ_command.py
Dec 06 09:36:56 np0005548790.localdomain sudo[159571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:56 np0005548790.localdomain python3.9[159573]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:56 np0005548790.localdomain sudo[159571]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:56 np0005548790.localdomain sudo[159676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryfhovmobqjsjjylchlgyyrtjwhunasu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013816.5399652-89-191096857114178/AnsiballZ_command.py
Dec 06 09:36:56 np0005548790.localdomain sudo[159676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:56 np0005548790.localdomain python3.9[159678]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:36:57 np0005548790.localdomain systemd[1]: tmp-crun.pQFauX.mount: Deactivated successfully.
Dec 06 09:36:57 np0005548790.localdomain systemd[1]: libpod-e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01.scope: Deactivated successfully.
Dec 06 09:36:57 np0005548790.localdomain podman[159679]: 2025-12-06 09:36:57.053870697 +0000 UTC m=+0.087512888 container died e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64)
Dec 06 09:36:57 np0005548790.localdomain podman[159679]: 2025-12-06 09:36:57.084216491 +0000 UTC m=+0.117858672 container cleanup e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 06 09:36:57 np0005548790.localdomain sudo[159676]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:57 np0005548790.localdomain podman[159696]: 2025-12-06 09:36:57.134231812 +0000 UTC m=+0.076083311 container remove e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, build-date=2025-11-19T00:35:22Z)
Dec 06 09:36:57 np0005548790.localdomain systemd[1]: libpod-conmon-e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01.scope: Deactivated successfully.
Dec 06 09:36:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35755 DF PROTO=TCP SPT=44992 DPT=9105 SEQ=4076690111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6931F0000000001030307) 
Dec 06 09:36:58 np0005548790.localdomain sudo[159801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxohizoxjvutmelqfgjcxdcbhzjtzahf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013817.4668868-118-101136425315952/AnsiballZ_systemd_service.py
Dec 06 09:36:58 np0005548790.localdomain sudo[159801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:36:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7d301709acbc74facad7e2d0e0c7cb4c38dc70cc38063e9c8b691a2c4c7e687e-merged.mount: Deactivated successfully.
Dec 06 09:36:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e33b647f1c7f511a2e3d9afc8393ee744450ef1d9a8b9d253d39d08c54121c01-userdata-shm.mount: Deactivated successfully.
Dec 06 09:36:58 np0005548790.localdomain python3.9[159803]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:36:58 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:36:58 np0005548790.localdomain systemd-rc-local-generator[159825]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:36:58 np0005548790.localdomain systemd-sysv-generator[159829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:36:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:36:58 np0005548790.localdomain sudo[159801]: pam_unix(sudo:session): session closed for user root
Dec 06 09:36:59 np0005548790.localdomain python3.9[159929]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:36:59 np0005548790.localdomain network[159946]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:36:59 np0005548790.localdomain network[159947]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:36:59 np0005548790.localdomain network[159948]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:37:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:37:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31029 DF PROTO=TCP SPT=33518 DPT=9105 SEQ=2645892822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6A01F0000000001030307) 
Dec 06 09:37:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31349 DF PROTO=TCP SPT=36592 DPT=9102 SEQ=3173696261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6AB1F0000000001030307) 
Dec 06 09:37:05 np0005548790.localdomain sudo[160148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llvbdptyvxjrxdqrrwdgywnhaadxmnur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013825.2299738-175-268886891078404/AnsiballZ_systemd_service.py
Dec 06 09:37:05 np0005548790.localdomain sudo[160148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:05 np0005548790.localdomain python3.9[160150]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:37:05 np0005548790.localdomain systemd-rc-local-generator[160177]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:05 np0005548790.localdomain systemd-sysv-generator[160181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:37:06 np0005548790.localdomain systemd[1]: Stopped target tripleo_nova_libvirt.target.
Dec 06 09:37:06 np0005548790.localdomain sudo[160148]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:06 np0005548790.localdomain sudo[160280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyntfygvptswhkbkuvdjqkvvwvuuhged ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013826.2662103-175-127507908548932/AnsiballZ_systemd_service.py
Dec 06 09:37:06 np0005548790.localdomain sudo[160280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48542 DF PROTO=TCP SPT=45096 DPT=9101 SEQ=3309160627 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6B71F0000000001030307) 
Dec 06 09:37:06 np0005548790.localdomain python3.9[160282]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:06 np0005548790.localdomain sudo[160280]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:07 np0005548790.localdomain sudo[160373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ughohdwwpjdyrslmzduclegqmosppynp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013826.954824-175-129497958291843/AnsiballZ_systemd_service.py
Dec 06 09:37:07 np0005548790.localdomain sudo[160373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:07 np0005548790.localdomain python3.9[160375]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:07 np0005548790.localdomain sudo[160373]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:07 np0005548790.localdomain sudo[160466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dibkenhvcxpkqgvmqhxjxrqehmrcnsml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013827.6709583-175-241245960542852/AnsiballZ_systemd_service.py
Dec 06 09:37:07 np0005548790.localdomain sudo[160466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:08 np0005548790.localdomain python3.9[160468]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:08 np0005548790.localdomain sudo[160466]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:08 np0005548790.localdomain sudo[160559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nodhifsyxwkgqkrsmcclfhhorzqcrbov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013828.382908-175-138232160610876/AnsiballZ_systemd_service.py
Dec 06 09:37:08 np0005548790.localdomain sudo[160559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:08 np0005548790.localdomain python3.9[160561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:08 np0005548790.localdomain sudo[160559]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:09 np0005548790.localdomain sudo[160652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkfvxmuasryjvligyrnkfslsfdivpbnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013829.063988-175-12143152025512/AnsiballZ_systemd_service.py
Dec 06 09:37:09 np0005548790.localdomain sudo[160652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:09 np0005548790.localdomain python3.9[160654]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:09 np0005548790.localdomain sudo[160652]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:09 np0005548790.localdomain sudo[160745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kocyyreuywhhpplpwkcsfdoyxqdwtyls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013829.7346134-175-57882921446282/AnsiballZ_systemd_service.py
Dec 06 09:37:09 np0005548790.localdomain sudo[160745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37902 DF PROTO=TCP SPT=59620 DPT=9100 SEQ=2468267367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6C51F0000000001030307) 
Dec 06 09:37:10 np0005548790.localdomain python3.9[160747]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:37:10 np0005548790.localdomain sudo[160745]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42 DF PROTO=TCP SPT=44424 DPT=9100 SEQ=3171995358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6D11F0000000001030307) 
Dec 06 09:37:13 np0005548790.localdomain sudo[160838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viknyamnpygffxykfnhfzzeklarenjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013832.9824069-332-194688605050272/AnsiballZ_file.py
Dec 06 09:37:13 np0005548790.localdomain sudo[160838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:13 np0005548790.localdomain python3.9[160840]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:13 np0005548790.localdomain sudo[160838]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:14 np0005548790.localdomain sudo[160930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnfogweboltudorzoepldnqpvqhysssg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013834.0053046-332-215362876375554/AnsiballZ_file.py
Dec 06 09:37:14 np0005548790.localdomain sudo[160930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:14 np0005548790.localdomain python3.9[160932]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:14 np0005548790.localdomain sudo[160930]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:14 np0005548790.localdomain sudo[161022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcutlsxaeygtyevbptlubotcbikvroty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013834.609335-332-131527113378682/AnsiballZ_file.py
Dec 06 09:37:14 np0005548790.localdomain sudo[161022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:15 np0005548790.localdomain python3.9[161024]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:15 np0005548790.localdomain sudo[161022]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:37:15 np0005548790.localdomain podman[161084]: 2025-12-06 09:37:15.567078723 +0000 UTC m=+0.082074093 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:37:15 np0005548790.localdomain podman[161084]: 2025-12-06 09:37:15.640195764 +0000 UTC m=+0.155191174 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 09:37:15 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:37:16 np0005548790.localdomain sudo[161139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jceuaczoaqjtilwchsthrrzspgiqvcrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013835.16241-332-222580832200508/AnsiballZ_file.py
Dec 06 09:37:16 np0005548790.localdomain sudo[161139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:16 np0005548790.localdomain python3.9[161141]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:16 np0005548790.localdomain sudo[161139]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:16 np0005548790.localdomain sudo[161231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkhxxmrmosfjlozknizijyjrjppbsxrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013836.4360425-332-254038225600785/AnsiballZ_file.py
Dec 06 09:37:16 np0005548790.localdomain sudo[161231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:16 np0005548790.localdomain python3.9[161233]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:16 np0005548790.localdomain sudo[161231]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:17 np0005548790.localdomain sudo[161323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzprvvmcqtriupotgoyniisgdhogjhxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013836.9768038-332-166955930724890/AnsiballZ_file.py
Dec 06 09:37:17 np0005548790.localdomain sudo[161323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:37:17 np0005548790.localdomain podman[161326]: 2025-12-06 09:37:17.337636452 +0000 UTC m=+0.086183772 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 09:37:17 np0005548790.localdomain podman[161326]: 2025-12-06 09:37:17.372320592 +0000 UTC m=+0.120867902 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:17 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:37:17 np0005548790.localdomain python3.9[161325]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:17 np0005548790.localdomain sudo[161323]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:17 np0005548790.localdomain sudo[161432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lrcscpotvunhdbkwlimkrtfkujucjafb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013837.5847077-332-201555164736661/AnsiballZ_file.py
Dec 06 09:37:17 np0005548790.localdomain sudo[161432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:18 np0005548790.localdomain python3.9[161434]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:18 np0005548790.localdomain sudo[161432]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39984 DF PROTO=TCP SPT=39960 DPT=9102 SEQ=4247811280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6E5210000000001030307) 
Dec 06 09:37:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64571 DF PROTO=TCP SPT=46536 DPT=9882 SEQ=2139379783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6E5280000000001030307) 
Dec 06 09:37:18 np0005548790.localdomain sudo[161524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqtfuihfphhjaciaczuaylnjaxefcpjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013838.3276286-481-18633453364545/AnsiballZ_file.py
Dec 06 09:37:18 np0005548790.localdomain sudo[161524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:18 np0005548790.localdomain python3.9[161526]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:18 np0005548790.localdomain sudo[161524]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:19 np0005548790.localdomain sudo[161616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pilabewiewytzzbosdrxffkwliexwcsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013838.9007144-481-3062308762523/AnsiballZ_file.py
Dec 06 09:37:19 np0005548790.localdomain sudo[161616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:19 np0005548790.localdomain python3.9[161618]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:19 np0005548790.localdomain sudo[161616]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:19 np0005548790.localdomain sudo[161708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqnfovcdygiqmqzojcfbcmjtiqwoggzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013839.4822226-481-67009079170022/AnsiballZ_file.py
Dec 06 09:37:19 np0005548790.localdomain sudo[161708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:19 np0005548790.localdomain python3.9[161710]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:19 np0005548790.localdomain sudo[161708]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:20 np0005548790.localdomain sudo[161800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brgkimddkigcnmbkvkbkpsuzwgdvwzll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013840.0611546-481-96124939646081/AnsiballZ_file.py
Dec 06 09:37:20 np0005548790.localdomain sudo[161800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:20 np0005548790.localdomain python3.9[161802]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:20 np0005548790.localdomain sudo[161800]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:20 np0005548790.localdomain sudo[161892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnvldsqwfipgxamzkdbzqffmtkbbcaym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013840.686648-481-14430471527529/AnsiballZ_file.py
Dec 06 09:37:20 np0005548790.localdomain sudo[161892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:21 np0005548790.localdomain python3.9[161894]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:21 np0005548790.localdomain sudo[161892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43 DF PROTO=TCP SPT=44424 DPT=9100 SEQ=3171995358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6F11F0000000001030307) 
Dec 06 09:37:21 np0005548790.localdomain sudo[161984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iybvrcggxjalhqxucadjqrbxdticlbax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013841.2956588-481-125201829863989/AnsiballZ_file.py
Dec 06 09:37:21 np0005548790.localdomain sudo[161984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:21 np0005548790.localdomain python3.9[161986]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:21 np0005548790.localdomain sudo[161984]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:22 np0005548790.localdomain sudo[162076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyqaalwdxinwdtixrwaiiuekfbkpkigz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013841.8851871-481-14439888770211/AnsiballZ_file.py
Dec 06 09:37:22 np0005548790.localdomain sudo[162076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:22 np0005548790.localdomain python3.9[162078]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:37:22 np0005548790.localdomain sudo[162076]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:22 np0005548790.localdomain sudo[162168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rahxyykqsbsmqfdfruvtijrssszkfllz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013842.6872597-635-187495230129798/AnsiballZ_command.py
Dec 06 09:37:22 np0005548790.localdomain sudo[162168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:23 np0005548790.localdomain python3.9[162170]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:23 np0005548790.localdomain sudo[162168]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:23 np0005548790.localdomain python3.9[162262]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:37:24 np0005548790.localdomain sudo[162352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhnmmsiuwazsfzotnjaiewmcmrnrwvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013844.23992-689-216268717044011/AnsiballZ_systemd_service.py
Dec 06 09:37:24 np0005548790.localdomain sudo[162352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4454 DF PROTO=TCP SPT=33888 DPT=9105 SEQ=452664483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F6FD9F0000000001030307) 
Dec 06 09:37:24 np0005548790.localdomain python3.9[162354]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:37:24 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:37:24 np0005548790.localdomain systemd-rc-local-generator[162376]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:37:24 np0005548790.localdomain systemd-sysv-generator[162380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:37:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:37:25 np0005548790.localdomain sudo[162352]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:26 np0005548790.localdomain sudo[162479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfidiycoxuvuebwopkzlzjwhavmvhpsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013845.8273094-713-28494919596734/AnsiballZ_command.py
Dec 06 09:37:26 np0005548790.localdomain sudo[162479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:26 np0005548790.localdomain python3.9[162481]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:26 np0005548790.localdomain sudo[162479]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:26 np0005548790.localdomain sudo[162572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-biwemdongfbrooqnujaoichrkzyjkjdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013846.3810763-713-181820445498969/AnsiballZ_command.py
Dec 06 09:37:26 np0005548790.localdomain sudo[162572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:26 np0005548790.localdomain python3.9[162574]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:26 np0005548790.localdomain sudo[162572]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:27 np0005548790.localdomain sudo[162665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbihfzdlsavpzuwdvdxtukwrtyjtlgyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013846.9663668-713-64157028037464/AnsiballZ_command.py
Dec 06 09:37:27 np0005548790.localdomain sudo[162665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:27 np0005548790.localdomain python3.9[162667]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:27 np0005548790.localdomain sudo[162665]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59102 DF PROTO=TCP SPT=43512 DPT=9105 SEQ=1403901614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F709200000000001030307) 
Dec 06 09:37:28 np0005548790.localdomain sudo[162758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wamaiqjbkzyiuwtnnccpbdkqjoacuawq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013847.9917824-713-129543388225307/AnsiballZ_command.py
Dec 06 09:37:28 np0005548790.localdomain sudo[162758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:28 np0005548790.localdomain python3.9[162760]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:28 np0005548790.localdomain sudo[162758]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:28 np0005548790.localdomain sudo[162851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fafozouiyinfbgwxhglkldptjipoucaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013848.5446277-713-110986994791398/AnsiballZ_command.py
Dec 06 09:37:28 np0005548790.localdomain sudo[162851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:28 np0005548790.localdomain python3.9[162853]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:28 np0005548790.localdomain sudo[162851]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:29 np0005548790.localdomain sudo[162944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clwcejovbyuzjpvzyvewrkfslrrjvhec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013849.0875301-713-52745591029084/AnsiballZ_command.py
Dec 06 09:37:29 np0005548790.localdomain sudo[162944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:29 np0005548790.localdomain python3.9[162946]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:29 np0005548790.localdomain sudo[162944]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:29 np0005548790.localdomain sudo[163037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjwsyhzbfdgcsnbuptaldiqznwtgosje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013849.6737022-713-238049439624642/AnsiballZ_command.py
Dec 06 09:37:29 np0005548790.localdomain sudo[163037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:30 np0005548790.localdomain python3.9[163039]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:37:30 np0005548790.localdomain sudo[163037]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4456 DF PROTO=TCP SPT=33888 DPT=9105 SEQ=452664483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7155F0000000001030307) 
Dec 06 09:37:31 np0005548790.localdomain sudo[163130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkojzedqsiehqtcpkrzzgvckbkohlywt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013851.4008455-875-39325260561482/AnsiballZ_getent.py
Dec 06 09:37:31 np0005548790.localdomain sudo[163130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:31 np0005548790.localdomain python3.9[163132]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 06 09:37:31 np0005548790.localdomain sudo[163130]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:32 np0005548790.localdomain sudo[163223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayhxgifjrjtytshndxjphszdldjdvypr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013852.1685753-898-2398751429090/AnsiballZ_group.py
Dec 06 09:37:32 np0005548790.localdomain sudo[163223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:32 np0005548790.localdomain python3.9[163225]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:37:32 np0005548790.localdomain groupadd[163226]: group added to /etc/group: name=libvirt, GID=42473
Dec 06 09:37:32 np0005548790.localdomain groupadd[163226]: group added to /etc/gshadow: name=libvirt
Dec 06 09:37:32 np0005548790.localdomain groupadd[163226]: new group: name=libvirt, GID=42473
Dec 06 09:37:33 np0005548790.localdomain sudo[163223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:33 np0005548790.localdomain sudo[163321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqoecaybndoxfwovbuwiqdfcijnriftc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013853.1977472-922-72989824716651/AnsiballZ_user.py
Dec 06 09:37:33 np0005548790.localdomain sudo[163321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39988 DF PROTO=TCP SPT=39960 DPT=9102 SEQ=4247811280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7211F0000000001030307) 
Dec 06 09:37:33 np0005548790.localdomain python3.9[163323]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548790.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:37:33 np0005548790.localdomain useradd[163325]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 09:37:34 np0005548790.localdomain sudo[163321]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:34 np0005548790.localdomain sudo[163421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnkqwffflzcnccrjvzssmgbvlavbyylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013854.445672-955-72005407115050/AnsiballZ_setup.py
Dec 06 09:37:34 np0005548790.localdomain sudo[163421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:34 np0005548790.localdomain python3.9[163423]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:37:35 np0005548790.localdomain sudo[163421]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:35 np0005548790.localdomain sudo[163475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhxtwnqhumghmdrmkqnqqecuhujuyjtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765013854.445672-955-72005407115050/AnsiballZ_dnf.py
Dec 06 09:37:35 np0005548790.localdomain sudo[163475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:37:35 np0005548790.localdomain python3.9[163477]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:37:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57268 DF PROTO=TCP SPT=49646 DPT=9100 SEQ=2519971046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F72E9F0000000001030307) 
Dec 06 09:37:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18095 DF PROTO=TCP SPT=34362 DPT=9100 SEQ=1902033170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F73B200000000001030307) 
Dec 06 09:37:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57270 DF PROTO=TCP SPT=49646 DPT=9100 SEQ=2519971046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7465F0000000001030307) 
Dec 06 09:37:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:37:46 np0005548790.localdomain podman[163549]: 2025-12-06 09:37:46.583303371 +0000 UTC m=+0.083374262 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:37:46 np0005548790.localdomain podman[163549]: 2025-12-06 09:37:46.652193111 +0000 UTC m=+0.152264002 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 09:37:46 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:37:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:37:47 np0005548790.localdomain systemd[1]: tmp-crun.YdBAQk.mount: Deactivated successfully.
Dec 06 09:37:47 np0005548790.localdomain podman[163577]: 2025-12-06 09:37:47.574473368 +0000 UTC m=+0.092833243 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:37:47 np0005548790.localdomain podman[163577]: 2025-12-06 09:37:47.60710883 +0000 UTC m=+0.125468685 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec 06 09:37:47 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:37:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:37:48.344 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:37:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:37:48.345 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:37:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:37:48.345 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:37:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41024 DF PROTO=TCP SPT=43822 DPT=9102 SEQ=4079311894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F75A510000000001030307) 
Dec 06 09:37:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8599 DF PROTO=TCP SPT=45866 DPT=9882 SEQ=1726387889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F75A580000000001030307) 
Dec 06 09:37:48 np0005548790.localdomain sudo[163594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:37:48 np0005548790.localdomain sudo[163594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:37:48 np0005548790.localdomain sudo[163594]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:48 np0005548790.localdomain sudo[163612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:37:48 np0005548790.localdomain sudo[163612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:37:49 np0005548790.localdomain sudo[163612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:50 np0005548790.localdomain sudo[163662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:37:50 np0005548790.localdomain sudo[163662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:37:50 np0005548790.localdomain sudo[163662]: pam_unix(sudo:session): session closed for user root
Dec 06 09:37:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8601 DF PROTO=TCP SPT=45866 DPT=9882 SEQ=1726387889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7665F0000000001030307) 
Dec 06 09:37:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12130 DF PROTO=TCP SPT=37728 DPT=9105 SEQ=4191303782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7729F0000000001030307) 
Dec 06 09:37:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31032 DF PROTO=TCP SPT=33518 DPT=9105 SEQ=2645892822 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F77F1F0000000001030307) 
Dec 06 09:38:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12132 DF PROTO=TCP SPT=37728 DPT=9105 SEQ=4191303782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F78A600000000001030307) 
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  Converting 2746 SID table entries...
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped).
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:01 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41028 DF PROTO=TCP SPT=43822 DPT=9102 SEQ=4079311894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7971F0000000001030307) 
Dec 06 09:38:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29589 DF PROTO=TCP SPT=51194 DPT=9101 SEQ=3596591736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7A1200000000001030307) 
Dec 06 09:38:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45 DF PROTO=TCP SPT=44424 DPT=9100 SEQ=3171995358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7AF1F0000000001030307) 
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:11 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9440 DF PROTO=TCP SPT=56700 DPT=9100 SEQ=1108359472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7BB9F0000000001030307) 
Dec 06 09:38:17 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=20 res=1
Dec 06 09:38:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:38:17 np0005548790.localdomain podman[164712]: 2025-12-06 09:38:17.610696855 +0000 UTC m=+0.115331127 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 09:38:17 np0005548790.localdomain podman[164712]: 2025-12-06 09:38:17.64836168 +0000 UTC m=+0.152995972 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 09:38:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:38:17 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:38:17 np0005548790.localdomain systemd[1]: tmp-crun.CULt92.mount: Deactivated successfully.
Dec 06 09:38:17 np0005548790.localdomain podman[164736]: 2025-12-06 09:38:17.775018975 +0000 UTC m=+0.091810576 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:38:17 np0005548790.localdomain podman[164736]: 2025-12-06 09:38:17.783106109 +0000 UTC m=+0.099897690 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 09:38:17 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:38:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=595 DF PROTO=TCP SPT=40192 DPT=9102 SEQ=2833995547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7CF810000000001030307) 
Dec 06 09:38:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20130 DF PROTO=TCP SPT=36110 DPT=9882 SEQ=318365004 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7CF880000000001030307) 
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:19 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=597 DF PROTO=TCP SPT=40192 DPT=9102 SEQ=2833995547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7DBA00000000001030307) 
Dec 06 09:38:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28903 DF PROTO=TCP SPT=54246 DPT=9105 SEQ=3024017020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7E7DF0000000001030307) 
Dec 06 09:38:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4459 DF PROTO=TCP SPT=33888 DPT=9105 SEQ=452664483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7F3200000000001030307) 
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:27 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28905 DF PROTO=TCP SPT=54246 DPT=9105 SEQ=3024017020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F7FFA00000000001030307) 
Dec 06 09:38:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=599 DF PROTO=TCP SPT=40192 DPT=9102 SEQ=2833995547 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F80B1F0000000001030307) 
Dec 06 09:38:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25424 DF PROTO=TCP SPT=56078 DPT=9101 SEQ=3666665876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8171F0000000001030307) 
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:37 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57273 DF PROTO=TCP SPT=49646 DPT=9100 SEQ=2519971046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8251F0000000001030307) 
Dec 06 09:38:41 np0005548790.localdomain sshd[164783]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:38:41 np0005548790.localdomain sshd[164783]: error: kex_exchange_identification: Connection closed by remote host
Dec 06 09:38:41 np0005548790.localdomain sshd[164783]: Connection closed by 43.163.93.82 port 37068
Dec 06 09:38:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64542 DF PROTO=TCP SPT=49834 DPT=9100 SEQ=311829268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F830DF0000000001030307) 
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:45 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:46 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:38:46 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=24 res=1
Dec 06 09:38:46 np0005548790.localdomain systemd-rc-local-generator[164824]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:38:46 np0005548790.localdomain systemd-sysv-generator[164827]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:38:46 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:38:46 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:38:46 np0005548790.localdomain systemd-rc-local-generator[164854]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:38:46 np0005548790.localdomain systemd-sysv-generator[164858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:38:46 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:38:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:38:48.344 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:38:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:38:48.346 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:38:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:38:48.346 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:38:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58718 DF PROTO=TCP SPT=33918 DPT=9102 SEQ=2757281454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F844B00000000001030307) 
Dec 06 09:38:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64753 DF PROTO=TCP SPT=52030 DPT=9882 SEQ=1795345347 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F844B80000000001030307) 
Dec 06 09:38:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:38:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:38:48 np0005548790.localdomain podman[164874]: 2025-12-06 09:38:48.569900477 +0000 UTC m=+0.080411903 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:38:48 np0005548790.localdomain podman[164874]: 2025-12-06 09:38:48.579576232 +0000 UTC m=+0.090087658 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:38:48 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:38:48 np0005548790.localdomain systemd[1]: tmp-crun.dBdXik.mount: Deactivated successfully.
Dec 06 09:38:48 np0005548790.localdomain podman[164875]: 2025-12-06 09:38:48.640382447 +0000 UTC m=+0.149802874 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:38:48 np0005548790.localdomain podman[164875]: 2025-12-06 09:38:48.747690078 +0000 UTC m=+0.257110465 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:38:48 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:38:50 np0005548790.localdomain sudo[164916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:38:50 np0005548790.localdomain sudo[164916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:38:50 np0005548790.localdomain sudo[164916]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:50 np0005548790.localdomain sudo[164934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:38:50 np0005548790.localdomain sudo[164934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:38:50 np0005548790.localdomain sudo[164934]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58720 DF PROTO=TCP SPT=33918 DPT=9102 SEQ=2757281454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F850A00000000001030307) 
Dec 06 09:38:51 np0005548790.localdomain sudo[164985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:38:51 np0005548790.localdomain sudo[164985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:38:51 np0005548790.localdomain sudo[164985]: pam_unix(sudo:session): session closed for user root
Dec 06 09:38:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55020 DF PROTO=TCP SPT=51746 DPT=9105 SEQ=2999305610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F85D1F0000000001030307) 
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 06 09:38:55 np0005548790.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 06 09:38:56 np0005548790.localdomain groupadd[165119]: group added to /etc/group: name=clevis, GID=985
Dec 06 09:38:56 np0005548790.localdomain groupadd[165119]: group added to /etc/gshadow: name=clevis
Dec 06 09:38:56 np0005548790.localdomain groupadd[165119]: new group: name=clevis, GID=985
Dec 06 09:38:56 np0005548790.localdomain useradd[165126]: new user: name=clevis, UID=985, GID=985, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 06 09:38:56 np0005548790.localdomain usermod[165136]: add 'clevis' to group 'tss'
Dec 06 09:38:56 np0005548790.localdomain usermod[165136]: add 'clevis' to shadow group 'tss'
Dec 06 09:38:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12135 DF PROTO=TCP SPT=37728 DPT=9105 SEQ=4191303782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8691F0000000001030307) 
Dec 06 09:39:00 np0005548790.localdomain groupadd[165158]: group added to /etc/group: name=dnsmasq, GID=984
Dec 06 09:39:00 np0005548790.localdomain groupadd[165158]: group added to /etc/gshadow: name=dnsmasq
Dec 06 09:39:00 np0005548790.localdomain groupadd[165158]: new group: name=dnsmasq, GID=984
Dec 06 09:39:00 np0005548790.localdomain useradd[165165]: new user: name=dnsmasq, UID=984, GID=984, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 06 09:39:00 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 09:39:00 np0005548790.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=25 res=1
Dec 06 09:39:00 np0005548790.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 06 09:39:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55022 DF PROTO=TCP SPT=51746 DPT=9105 SEQ=2999305610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F874E00000000001030307) 
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Reloading rules
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Collecting garbage unconditionally...
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Finished loading, compiling and executing 5 rules
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Reloading rules
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Collecting garbage unconditionally...
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Loading rules from directory /etc/polkit-1/rules.d
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 06 09:39:01 np0005548790.localdomain polkitd[1036]: Finished loading, compiling and executing 5 rules
Dec 06 09:39:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58722 DF PROTO=TCP SPT=33918 DPT=9102 SEQ=2757281454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8811F0000000001030307) 
Dec 06 09:39:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57004 DF PROTO=TCP SPT=39628 DPT=9101 SEQ=3163843945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F88B200000000001030307) 
Dec 06 09:39:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9443 DF PROTO=TCP SPT=56700 DPT=9100 SEQ=1108359472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8991F0000000001030307) 
Dec 06 09:39:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54374 DF PROTO=TCP SPT=42224 DPT=9100 SEQ=975173215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8A5DF0000000001030307) 
Dec 06 09:39:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25518 DF PROTO=TCP SPT=52592 DPT=9102 SEQ=662746923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8B9E10000000001030307) 
Dec 06 09:39:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35155 DF PROTO=TCP SPT=41442 DPT=9882 SEQ=3434824694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8B9E70000000001030307) 
Dec 06 09:39:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:39:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:39:19 np0005548790.localdomain podman[169783]: 2025-12-06 09:39:19.567985276 +0000 UTC m=+0.075276248 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 09:39:19 np0005548790.localdomain podman[169787]: 2025-12-06 09:39:19.576768717 +0000 UTC m=+0.078046660 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 06 09:39:19 np0005548790.localdomain podman[169783]: 2025-12-06 09:39:19.580060044 +0000 UTC m=+0.087351016 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:39:19 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:39:19 np0005548790.localdomain podman[169787]: 2025-12-06 09:39:19.607435537 +0000 UTC m=+0.108688829 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:39:19 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:39:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35157 DF PROTO=TCP SPT=41442 DPT=9882 SEQ=3434824694 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8C5E00000000001030307) 
Dec 06 09:39:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=456 DF PROTO=TCP SPT=56580 DPT=9105 SEQ=1065687561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8D25F0000000001030307) 
Dec 06 09:39:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28908 DF PROTO=TCP SPT=54246 DPT=9105 SEQ=3024017020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8DD1F0000000001030307) 
Dec 06 09:39:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=458 DF PROTO=TCP SPT=56580 DPT=9105 SEQ=1065687561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8EA200000000001030307) 
Dec 06 09:39:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25522 DF PROTO=TCP SPT=52592 DPT=9102 SEQ=662746923 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F8F5200000000001030307) 
Dec 06 09:39:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55240 DF PROTO=TCP SPT=60802 DPT=9101 SEQ=83876793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9011F0000000001030307) 
Dec 06 09:39:38 np0005548790.localdomain groupadd[182204]: group added to /etc/group: name=ceph, GID=167
Dec 06 09:39:38 np0005548790.localdomain groupadd[182204]: group added to /etc/gshadow: name=ceph
Dec 06 09:39:38 np0005548790.localdomain groupadd[182204]: new group: name=ceph, GID=167
Dec 06 09:39:38 np0005548790.localdomain useradd[182210]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 06 09:39:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64545 DF PROTO=TCP SPT=49834 DPT=9100 SEQ=311829268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F90F1F0000000001030307) 
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 06 09:39:42 np0005548790.localdomain sshd[119211]: Received signal 15; terminating.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 06 09:39:42 np0005548790.localdomain sshd[182854]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:39:42 np0005548790.localdomain sshd[182854]: Server listening on 0.0.0.0 port 22.
Dec 06 09:39:42 np0005548790.localdomain sshd[182854]: Server listening on :: port 22.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10871 DF PROTO=TCP SPT=39514 DPT=9100 SEQ=2874496174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F91B1F0000000001030307) 
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:39:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:39:44 np0005548790.localdomain systemd-sysv-generator[183092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:39:44 np0005548790.localdomain systemd-rc-local-generator[183086]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:39:44 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:39:48 np0005548790.localdomain sudo[163475]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:39:48.346 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:39:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:39:48.347 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:39:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:39:48.347 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:39:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11557 DF PROTO=TCP SPT=39430 DPT=9102 SEQ=4029694233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F92F110000000001030307) 
Dec 06 09:39:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53421 DF PROTO=TCP SPT=53504 DPT=9882 SEQ=1580980981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F92F180000000001030307) 
Dec 06 09:39:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:39:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:39:50 np0005548790.localdomain podman[189348]: 2025-12-06 09:39:50.090240818 +0000 UTC m=+0.090543838 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:39:50 np0005548790.localdomain podman[189348]: 2025-12-06 09:39:50.169349699 +0000 UTC m=+0.169652679 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:39:50 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:39:50 np0005548790.localdomain podman[189346]: 2025-12-06 09:39:50.170520751 +0000 UTC m=+0.178274201 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:39:50 np0005548790.localdomain podman[189346]: 2025-12-06 09:39:50.251993106 +0000 UTC m=+0.259746566 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:39:50 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:39:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53423 DF PROTO=TCP SPT=53504 DPT=9882 SEQ=1580980981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F93B1F0000000001030307) 
Dec 06 09:39:51 np0005548790.localdomain sudo[190165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:39:51 np0005548790.localdomain sudo[190165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:39:51 np0005548790.localdomain sudo[190165]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:51 np0005548790.localdomain sudo[190231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:39:51 np0005548790.localdomain sudo[190231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:39:52 np0005548790.localdomain sudo[190231]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:53 np0005548790.localdomain sudo[190942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:39:53 np0005548790.localdomain sudo[190942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:39:53 np0005548790.localdomain sudo[190942]: pam_unix(sudo:session): session closed for user root
Dec 06 09:39:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60835 DF PROTO=TCP SPT=43276 DPT=9105 SEQ=2811796419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9475F0000000001030307) 
Dec 06 09:39:55 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:39:55 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:39:55 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Consumed 13.660s CPU time.
Dec 06 09:39:55 np0005548790.localdomain systemd[1]: run-r5e68c77ac07d442bb9208c1fc60d6cda.service: Deactivated successfully.
Dec 06 09:39:55 np0005548790.localdomain systemd[1]: run-r2ff3cbb90cce40d492a5b7fe8dee5e1c.service: Deactivated successfully.
Dec 06 09:39:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55025 DF PROTO=TCP SPT=51746 DPT=9105 SEQ=2999305610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9531F0000000001030307) 
Dec 06 09:40:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60837 DF PROTO=TCP SPT=43276 DPT=9105 SEQ=2811796419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F95F1F0000000001030307) 
Dec 06 09:40:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53425 DF PROTO=TCP SPT=53504 DPT=9882 SEQ=1580980981 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F96B1F0000000001030307) 
Dec 06 09:40:04 np0005548790.localdomain sudo[191864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhpbekpbdqktprxqohseaejsnqegxegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014004.0007956-992-30940740708326/AnsiballZ_systemd.py
Dec 06 09:40:04 np0005548790.localdomain sudo[191864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:04 np0005548790.localdomain python3.9[191866]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:40:04 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:05 np0005548790.localdomain systemd-rc-local-generator[191892]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:05 np0005548790.localdomain systemd-sysv-generator[191897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:05 np0005548790.localdomain sudo[191864]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:05 np0005548790.localdomain sudo[192013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qashgychlezctlksgeteyvibuvtpakto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014005.3697526-992-103388504007399/AnsiballZ_systemd.py
Dec 06 09:40:05 np0005548790.localdomain sudo[192013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:05 np0005548790.localdomain python3.9[192015]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:40:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:06 np0005548790.localdomain systemd-rc-local-generator[192041]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:06 np0005548790.localdomain systemd-sysv-generator[192045]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:06 np0005548790.localdomain sudo[192013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:06 np0005548790.localdomain sudo[192162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwgtsgfxcepqjxnqdhjnajdavibkgwbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014006.4091775-992-222909064992587/AnsiballZ_systemd.py
Dec 06 09:40:06 np0005548790.localdomain sudo[192162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:06 np0005548790.localdomain python3.9[192164]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:07 np0005548790.localdomain systemd-rc-local-generator[192193]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:07 np0005548790.localdomain systemd-sysv-generator[192196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43340 DF PROTO=TCP SPT=51504 DPT=9100 SEQ=417053145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9789F0000000001030307) 
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:07 np0005548790.localdomain sudo[192162]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:08 np0005548790.localdomain sudo[192311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovnvahbgonzaxkxzbchkxjqedwdtwrnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014008.0940924-992-234392439501386/AnsiballZ_systemd.py
Dec 06 09:40:08 np0005548790.localdomain sudo[192311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:08 np0005548790.localdomain python3.9[192313]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:40:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x561316669610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5613166682d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 09:40:08 np0005548790.localdomain systemd-sysv-generator[192341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:08 np0005548790.localdomain systemd-rc-local-generator[192338]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:08 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:09 np0005548790.localdomain sudo[192311]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:09 np0005548790.localdomain sudo[192459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhvnxillbvkdwbaqyouhtiemcljwzjuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014009.425521-1079-241254905489702/AnsiballZ_systemd.py
Dec 06 09:40:09 np0005548790.localdomain sudo[192459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:09 np0005548790.localdomain python3.9[192461]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54377 DF PROTO=TCP SPT=42224 DPT=9100 SEQ=975173215 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9851F0000000001030307) 
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:11 np0005548790.localdomain systemd-sysv-generator[192495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:11 np0005548790.localdomain systemd-rc-local-generator[192492]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:11 np0005548790.localdomain sudo[192459]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:11 np0005548790.localdomain sudo[192608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asskmnmzbjnqstwiwqcrvhdqupnjbwvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014011.4578252-1079-91117980118354/AnsiballZ_systemd.py
Dec 06 09:40:11 np0005548790.localdomain sudo[192608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:12 np0005548790.localdomain python3.9[192610]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:12 np0005548790.localdomain systemd-rc-local-generator[192637]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:12 np0005548790.localdomain systemd-sysv-generator[192642]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:12 np0005548790.localdomain sudo[192608]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:12 np0005548790.localdomain sudo[192757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtonbkmwjtadspxezadlsxfzhanrhrij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014012.5393875-1079-55922432848143/AnsiballZ_systemd.py
Dec 06 09:40:12 np0005548790.localdomain sudo[192757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:40:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.024       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.022       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b53971610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.026       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x560b539702d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 06 09:40:13 np0005548790.localdomain python3.9[192759]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43342 DF PROTO=TCP SPT=51504 DPT=9100 SEQ=417053145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9905F0000000001030307) 
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:14 np0005548790.localdomain systemd-sysv-generator[192793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:14 np0005548790.localdomain systemd-rc-local-generator[192787]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:14 np0005548790.localdomain sudo[192757]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:14 np0005548790.localdomain sudo[192906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nplrendqmlermcbbecgnatplekltlqjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014014.6213412-1079-146504590593280/AnsiballZ_systemd.py
Dec 06 09:40:14 np0005548790.localdomain sudo[192906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:15 np0005548790.localdomain python3.9[192908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:16 np0005548790.localdomain sudo[192906]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:16 np0005548790.localdomain sudo[193019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgubdkwpjygyivfuhdmbthralqaxhjnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014016.4149816-1079-64186057849457/AnsiballZ_systemd.py
Dec 06 09:40:16 np0005548790.localdomain sudo[193019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:16 np0005548790.localdomain python3.9[193021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:17 np0005548790.localdomain systemd-sysv-generator[193053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:17 np0005548790.localdomain systemd-rc-local-generator[193049]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:17 np0005548790.localdomain sudo[193019]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:17 np0005548790.localdomain sudo[193169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylyctuctfeeulpiiqugejnkmzicphenz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014017.5580587-1186-67765139904568/AnsiballZ_systemd.py
Dec 06 09:40:17 np0005548790.localdomain sudo[193169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:18 np0005548790.localdomain python3.9[193171]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:40:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8009 DF PROTO=TCP SPT=36060 DPT=9102 SEQ=1813817128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9A4400000000001030307) 
Dec 06 09:40:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10728 DF PROTO=TCP SPT=42894 DPT=9882 SEQ=2223463737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9A4480000000001030307) 
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:40:19 np0005548790.localdomain systemd-rc-local-generator[193201]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:40:19 np0005548790.localdomain systemd-sysv-generator[193204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:40:19 np0005548790.localdomain sudo[193169]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:19 np0005548790.localdomain sudo[193318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbtmpdozgomyraaplsudhqvyjpotrgfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014019.717404-1211-155891450736750/AnsiballZ_systemd.py
Dec 06 09:40:19 np0005548790.localdomain sudo[193318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:20 np0005548790.localdomain python3.9[193320]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:40:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:40:20 np0005548790.localdomain sudo[193318]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:20 np0005548790.localdomain podman[193323]: 2025-12-06 09:40:20.401891109 +0000 UTC m=+0.100082793 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:40:20 np0005548790.localdomain podman[193323]: 2025-12-06 09:40:20.467058407 +0000 UTC m=+0.165250081 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:40:20 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:40:20 np0005548790.localdomain podman[193322]: 2025-12-06 09:40:20.469163613 +0000 UTC m=+0.167197073 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:20 np0005548790.localdomain podman[193322]: 2025-12-06 09:40:20.553243478 +0000 UTC m=+0.251276928 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 09:40:20 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:40:20 np0005548790.localdomain sudo[193475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgoslprjxuntufiwjsgepezazvhuttym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014020.4898598-1211-49017590347917/AnsiballZ_systemd.py
Dec 06 09:40:20 np0005548790.localdomain sudo[193475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:21 np0005548790.localdomain python3.9[193477]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:21 np0005548790.localdomain sudo[193475]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8011 DF PROTO=TCP SPT=36060 DPT=9102 SEQ=1813817128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9B05F0000000001030307) 
Dec 06 09:40:21 np0005548790.localdomain sudo[193588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iakevsfnaipgtibomfddpeskiyfiavcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014021.659529-1211-178352017031581/AnsiballZ_systemd.py
Dec 06 09:40:21 np0005548790.localdomain sudo[193588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:22 np0005548790.localdomain python3.9[193590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:22 np0005548790.localdomain sudo[193588]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:22 np0005548790.localdomain sudo[193701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfnvsqmnamgmocmarqrswlcwboddqufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014022.4463582-1211-238350577380904/AnsiballZ_systemd.py
Dec 06 09:40:22 np0005548790.localdomain sudo[193701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:22 np0005548790.localdomain python3.9[193703]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:23 np0005548790.localdomain sudo[193701]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:23 np0005548790.localdomain sudo[193814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbvgutvplsehxjdsqpshfbyeusrihjrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014023.1887472-1211-122797536139819/AnsiballZ_systemd.py
Dec 06 09:40:23 np0005548790.localdomain sudo[193814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:23 np0005548790.localdomain python3.9[193816]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:23 np0005548790.localdomain sudo[193814]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:24 np0005548790.localdomain sudo[193927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ujymtlklzqnkwmqffrwzsckbrjzkchsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014023.9347615-1211-264646452009191/AnsiballZ_systemd.py
Dec 06 09:40:24 np0005548790.localdomain sudo[193927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:24 np0005548790.localdomain python3.9[193929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:24 np0005548790.localdomain sudo[193927]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16872 DF PROTO=TCP SPT=37762 DPT=9105 SEQ=2248873528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9BC9F0000000001030307) 
Dec 06 09:40:24 np0005548790.localdomain sudo[194040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bizgcvnadcfofsphyyayermsokrpdjxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014024.7093396-1211-26389900517912/AnsiballZ_systemd.py
Dec 06 09:40:24 np0005548790.localdomain sudo[194040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:25 np0005548790.localdomain python3.9[194042]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:25 np0005548790.localdomain sudo[194040]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:25 np0005548790.localdomain sudo[194153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hoenmxcbqwucnzmpqcxyxbdcqlzsasko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014025.5031638-1211-88879532299552/AnsiballZ_systemd.py
Dec 06 09:40:25 np0005548790.localdomain sudo[194153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:26 np0005548790.localdomain python3.9[194155]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:26 np0005548790.localdomain sudo[194153]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:26 np0005548790.localdomain sudo[194266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eurjzzsedmotokiphccmcngkkhjhuayt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014026.299391-1211-177795927802433/AnsiballZ_systemd.py
Dec 06 09:40:26 np0005548790.localdomain sudo[194266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:26 np0005548790.localdomain python3.9[194268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:26 np0005548790.localdomain sudo[194266]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:27 np0005548790.localdomain sudo[194379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpqnpbcppgbuputbqhnhmjtcfesvuzvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014027.0537329-1211-255517796270766/AnsiballZ_systemd.py
Dec 06 09:40:27 np0005548790.localdomain sudo[194379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:27 np0005548790.localdomain python3.9[194381]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:27 np0005548790.localdomain sudo[194379]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=461 DF PROTO=TCP SPT=56580 DPT=9105 SEQ=1065687561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9C9200000000001030307) 
Dec 06 09:40:28 np0005548790.localdomain sudo[194492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-taojbqohpttxytxtlztcacrjyslnaqwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014027.8138266-1211-198835027271155/AnsiballZ_systemd.py
Dec 06 09:40:28 np0005548790.localdomain sudo[194492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:28 np0005548790.localdomain python3.9[194494]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:29 np0005548790.localdomain sudo[194492]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:29 np0005548790.localdomain sudo[194605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcqpiiyivrtipbdguofxarmhjdgkllys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014029.6316133-1211-143155389797130/AnsiballZ_systemd.py
Dec 06 09:40:29 np0005548790.localdomain sudo[194605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:30 np0005548790.localdomain python3.9[194607]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:30 np0005548790.localdomain sudo[194605]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:30 np0005548790.localdomain sudo[194718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lngafiyjoieozoflltyarnxclzjhtjwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014030.3886237-1211-238619769952572/AnsiballZ_systemd.py
Dec 06 09:40:30 np0005548790.localdomain sudo[194718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16874 DF PROTO=TCP SPT=37762 DPT=9105 SEQ=2248873528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9D45F0000000001030307) 
Dec 06 09:40:30 np0005548790.localdomain python3.9[194720]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:32 np0005548790.localdomain sudo[194718]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:32 np0005548790.localdomain sudo[194831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nflbzxjcgefowypugqybvrbnqcwgljny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014032.1510344-1211-50673342931010/AnsiballZ_systemd.py
Dec 06 09:40:32 np0005548790.localdomain sudo[194831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:32 np0005548790.localdomain python3.9[194833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 06 09:40:32 np0005548790.localdomain sudo[194831]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10732 DF PROTO=TCP SPT=42894 DPT=9882 SEQ=2223463737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9E11F0000000001030307) 
Dec 06 09:40:35 np0005548790.localdomain sudo[194944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nloqfvhogzfmdubzjwicohejhjgczpfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014035.519072-1517-175977817843533/AnsiballZ_file.py
Dec 06 09:40:35 np0005548790.localdomain sudo[194944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:35 np0005548790.localdomain python3.9[194946]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:40:35 np0005548790.localdomain sudo[194944]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:36 np0005548790.localdomain sudo[195054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilmlzsbrofdsgtdnzgqvidsnamxsiscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014036.1157935-1517-147703690695003/AnsiballZ_file.py
Dec 06 09:40:36 np0005548790.localdomain sudo[195054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62606 DF PROTO=TCP SPT=54882 DPT=9101 SEQ=525176442 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9EB1F0000000001030307) 
Dec 06 09:40:36 np0005548790.localdomain python3.9[195056]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:40:36 np0005548790.localdomain sudo[195054]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:37 np0005548790.localdomain sudo[195164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oepdlvdticlsgikujcledrydprpymhyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014036.7806935-1517-241423146252723/AnsiballZ_file.py
Dec 06 09:40:37 np0005548790.localdomain sudo[195164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:37 np0005548790.localdomain python3.9[195166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:40:37 np0005548790.localdomain sudo[195164]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:37 np0005548790.localdomain sudo[195274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzwulkdyrxayhdzrsesybkoqijwpfbvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014037.3841527-1517-248788501990875/AnsiballZ_file.py
Dec 06 09:40:37 np0005548790.localdomain sudo[195274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:37 np0005548790.localdomain python3.9[195276]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:40:37 np0005548790.localdomain sudo[195274]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:38 np0005548790.localdomain sudo[195384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myzsfaftoxsfvcabjxvwvmpqmctvakao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014038.01949-1517-187309918149864/AnsiballZ_file.py
Dec 06 09:40:38 np0005548790.localdomain sudo[195384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:38 np0005548790.localdomain python3.9[195386]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:40:38 np0005548790.localdomain sudo[195384]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:38 np0005548790.localdomain sudo[195494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apxdmusccjpvvbuwcvnnlwxbajcbdwdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014038.6350539-1517-96281033706257/AnsiballZ_file.py
Dec 06 09:40:38 np0005548790.localdomain sudo[195494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:39 np0005548790.localdomain python3.9[195496]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:40:39 np0005548790.localdomain sudo[195494]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:39 np0005548790.localdomain sudo[195604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rtivtxldroniwsjyvxcurwkygympzasp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014039.37256-1646-239365839091325/AnsiballZ_stat.py
Dec 06 09:40:39 np0005548790.localdomain sudo[195604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:39 np0005548790.localdomain python3.9[195606]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:40 np0005548790.localdomain sudo[195604]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10874 DF PROTO=TCP SPT=39514 DPT=9100 SEQ=2874496174 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17F9F9200000000001030307) 
Dec 06 09:40:40 np0005548790.localdomain sudo[195694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oazthacjmuaukerjzsppizgohgsfwqlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014039.37256-1646-239365839091325/AnsiballZ_copy.py
Dec 06 09:40:40 np0005548790.localdomain sudo[195694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:40 np0005548790.localdomain python3.9[195696]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014039.37256-1646-239365839091325/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:40 np0005548790.localdomain sudo[195694]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:41 np0005548790.localdomain sudo[195804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-annidinjcjimywxrwnlyeyksybixvyox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014040.8377745-1646-220744115393607/AnsiballZ_stat.py
Dec 06 09:40:41 np0005548790.localdomain sudo[195804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:41 np0005548790.localdomain python3.9[195806]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:41 np0005548790.localdomain sudo[195804]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:41 np0005548790.localdomain sudo[195894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-niyuerqkwrnnyazggeuihovfiprxtsbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014040.8377745-1646-220744115393607/AnsiballZ_copy.py
Dec 06 09:40:41 np0005548790.localdomain sudo[195894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:41 np0005548790.localdomain python3.9[195896]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014040.8377745-1646-220744115393607/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:41 np0005548790.localdomain sudo[195894]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:42 np0005548790.localdomain sudo[196004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zolapfnxomojkxftrsfeiazvihgzqpul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014041.9361753-1646-263492135973696/AnsiballZ_stat.py
Dec 06 09:40:42 np0005548790.localdomain sudo[196004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:42 np0005548790.localdomain python3.9[196006]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:42 np0005548790.localdomain sudo[196004]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:42 np0005548790.localdomain sudo[196094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjvtfpntmzwwicwnmeyczetygeidptwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014041.9361753-1646-263492135973696/AnsiballZ_copy.py
Dec 06 09:40:42 np0005548790.localdomain sudo[196094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:42 np0005548790.localdomain python3.9[196096]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014041.9361753-1646-263492135973696/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:42 np0005548790.localdomain sudo[196094]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:43 np0005548790.localdomain sudo[196204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sjgznkzanziomvudbsllbuazhrrtgfwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014043.0146747-1646-154385278564123/AnsiballZ_stat.py
Dec 06 09:40:43 np0005548790.localdomain sudo[196204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47586 DF PROTO=TCP SPT=46724 DPT=9100 SEQ=3627444048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA059F0000000001030307) 
Dec 06 09:40:43 np0005548790.localdomain python3.9[196206]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:43 np0005548790.localdomain sudo[196204]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:43 np0005548790.localdomain sudo[196294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nseqqycfotxvhsmznobljnayjsadvxmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014043.0146747-1646-154385278564123/AnsiballZ_copy.py
Dec 06 09:40:43 np0005548790.localdomain sudo[196294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:44 np0005548790.localdomain python3.9[196296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014043.0146747-1646-154385278564123/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:44 np0005548790.localdomain sudo[196294]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:44 np0005548790.localdomain sudo[196404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wauwpurlodrgyilkufphfxswazijnlyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014044.1401815-1646-272767641218700/AnsiballZ_stat.py
Dec 06 09:40:44 np0005548790.localdomain sudo[196404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:44 np0005548790.localdomain python3.9[196406]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:44 np0005548790.localdomain sudo[196404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:45 np0005548790.localdomain sudo[196494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsfzejwqhrrbfsmivngdqlvxsoykfkcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014044.1401815-1646-272767641218700/AnsiballZ_copy.py
Dec 06 09:40:45 np0005548790.localdomain sudo[196494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:45 np0005548790.localdomain python3.9[196496]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014044.1401815-1646-272767641218700/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:45 np0005548790.localdomain sudo[196494]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:45 np0005548790.localdomain sudo[196604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwoxldxqfwgplqlereinnrveeoctbnnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014045.3533432-1646-185182829532720/AnsiballZ_stat.py
Dec 06 09:40:45 np0005548790.localdomain sudo[196604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:45 np0005548790.localdomain python3.9[196606]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:45 np0005548790.localdomain sudo[196604]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:46 np0005548790.localdomain sudo[196694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cepvxdpaqpbrslbpxojuvntgldebowwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014045.3533432-1646-185182829532720/AnsiballZ_copy.py
Dec 06 09:40:46 np0005548790.localdomain sudo[196694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:46 np0005548790.localdomain python3.9[196696]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014045.3533432-1646-185182829532720/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:46 np0005548790.localdomain sudo[196694]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:46 np0005548790.localdomain sudo[196804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvdgndhjdbxftkigdemwrlirvjxyyhcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014046.5349393-1646-183244216347225/AnsiballZ_stat.py
Dec 06 09:40:46 np0005548790.localdomain sudo[196804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:46 np0005548790.localdomain python3.9[196806]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:46 np0005548790.localdomain sudo[196804]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:47 np0005548790.localdomain sudo[196892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upqapatpznmtkpaflzipkdfwwxhgqtsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014046.5349393-1646-183244216347225/AnsiballZ_copy.py
Dec 06 09:40:47 np0005548790.localdomain sudo[196892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:47 np0005548790.localdomain python3.9[196894]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014046.5349393-1646-183244216347225/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:47 np0005548790.localdomain sudo[196892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:47 np0005548790.localdomain sudo[197002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdychaecvnbzoztwvrbzloxhhvwrkspk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014047.6618066-1646-93434741651821/AnsiballZ_stat.py
Dec 06 09:40:47 np0005548790.localdomain sudo[197002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:48 np0005548790.localdomain python3.9[197004]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:40:48 np0005548790.localdomain sudo[197002]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:40:48.347 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:40:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:40:48.347 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:40:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:40:48.348 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:40:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3589 DF PROTO=TCP SPT=58984 DPT=9102 SEQ=4270372397 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA19710000000001030307) 
Dec 06 09:40:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1594 DF PROTO=TCP SPT=49316 DPT=9882 SEQ=1408846603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA19770000000001030307) 
Dec 06 09:40:48 np0005548790.localdomain sudo[197092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzdgtsxlfqisubrfmtgrcglkdnvmwlyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014047.6618066-1646-93434741651821/AnsiballZ_copy.py
Dec 06 09:40:48 np0005548790.localdomain sudo[197092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:48 np0005548790.localdomain python3.9[197094]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765014047.6618066-1646-93434741651821/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:48 np0005548790.localdomain sudo[197092]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:49 np0005548790.localdomain sudo[197202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhspciavapdpkyyywfmbylcqludnzzcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014049.233354-1988-2828685762708/AnsiballZ_file.py
Dec 06 09:40:49 np0005548790.localdomain sudo[197202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:49 np0005548790.localdomain python3.9[197204]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:49 np0005548790.localdomain sudo[197202]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:50 np0005548790.localdomain sudo[197312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjqkjtoxtkbesufkshypsxrcgukinltf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014049.8928256-2011-23992976335367/AnsiballZ_file.py
Dec 06 09:40:50 np0005548790.localdomain sudo[197312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:50 np0005548790.localdomain python3.9[197314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:50 np0005548790.localdomain sudo[197312]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:50 np0005548790.localdomain sudo[197422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-peanhgtprbkxltjofamluopfgsucxujo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014050.5812871-2011-32258209475353/AnsiballZ_file.py
Dec 06 09:40:50 np0005548790.localdomain sudo[197422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:40:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:40:50 np0005548790.localdomain systemd[1]: tmp-crun.TZ8ctz.mount: Deactivated successfully.
Dec 06 09:40:50 np0005548790.localdomain podman[197425]: 2025-12-06 09:40:50.954467562 +0000 UTC m=+0.087706595 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 09:40:51 np0005548790.localdomain podman[197426]: 2025-12-06 09:40:51.008126111 +0000 UTC m=+0.137357056 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 09:40:51 np0005548790.localdomain podman[197425]: 2025-12-06 09:40:51.038316041 +0000 UTC m=+0.171555124 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:40:51 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:40:51 np0005548790.localdomain python3.9[197424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:51 np0005548790.localdomain sudo[197422]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:51 np0005548790.localdomain podman[197426]: 2025-12-06 09:40:51.091234661 +0000 UTC m=+0.220465566 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 06 09:40:51 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:40:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3591 DF PROTO=TCP SPT=58984 DPT=9102 SEQ=4270372397 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA255F0000000001030307) 
Dec 06 09:40:51 np0005548790.localdomain sudo[197575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxpburcsnrkzqneqicokzbhweyykpxuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014051.2030785-2011-148348749650667/AnsiballZ_file.py
Dec 06 09:40:51 np0005548790.localdomain sudo[197575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:51 np0005548790.localdomain python3.9[197577]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:51 np0005548790.localdomain sudo[197575]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:51 np0005548790.localdomain systemd[1]: tmp-crun.LPubjh.mount: Deactivated successfully.
Dec 06 09:40:52 np0005548790.localdomain sudo[197685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ndxwvwmrnxtbamvtkbbpopviiyfspdbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014051.8405173-2011-140231375599231/AnsiballZ_file.py
Dec 06 09:40:52 np0005548790.localdomain sudo[197685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:52 np0005548790.localdomain python3.9[197687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:52 np0005548790.localdomain sudo[197685]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:52 np0005548790.localdomain sudo[197795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhtnmnwpfoihgluvqaaapdbdxngzfcda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014052.4256058-2011-82758667826540/AnsiballZ_file.py
Dec 06 09:40:52 np0005548790.localdomain sudo[197795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:52 np0005548790.localdomain python3.9[197797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:52 np0005548790.localdomain sudo[197795]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:53 np0005548790.localdomain sudo[197905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwjlpiabmpmkazyovnvtfhebarnuasan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014053.053331-2011-169159294858563/AnsiballZ_file.py
Dec 06 09:40:53 np0005548790.localdomain sudo[197905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:53 np0005548790.localdomain sudo[197907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:40:53 np0005548790.localdomain sudo[197907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:40:53 np0005548790.localdomain sudo[197907]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:53 np0005548790.localdomain sudo[197926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:40:53 np0005548790.localdomain sudo[197926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:40:53 np0005548790.localdomain python3.9[197910]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:53 np0005548790.localdomain sudo[197905]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:53 np0005548790.localdomain sudo[198065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omximsevuvkgojugclgrkilfgjdfhsdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014053.6691232-2011-35420472489031/AnsiballZ_file.py
Dec 06 09:40:53 np0005548790.localdomain sudo[198065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:54 np0005548790.localdomain sudo[197926]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:54 np0005548790.localdomain python3.9[198070]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:54 np0005548790.localdomain sudo[198065]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:54 np0005548790.localdomain sudo[198192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqjrfkmlmrpnrhfsfdtzinondfxvgxsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014054.2520823-2011-215782973389998/AnsiballZ_file.py
Dec 06 09:40:54 np0005548790.localdomain sudo[198192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=223 DF PROTO=TCP SPT=34988 DPT=9105 SEQ=3141065904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA31DF0000000001030307) 
Dec 06 09:40:54 np0005548790.localdomain python3.9[198194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:54 np0005548790.localdomain sudo[198192]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:54 np0005548790.localdomain sudo[198195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:40:54 np0005548790.localdomain sudo[198195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:40:54 np0005548790.localdomain sudo[198195]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:55 np0005548790.localdomain sudo[198320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oepxokzbtfxaienwwtypcpjlbraedxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014054.8904045-2011-157227547977470/AnsiballZ_file.py
Dec 06 09:40:55 np0005548790.localdomain sudo[198320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:55 np0005548790.localdomain python3.9[198322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:55 np0005548790.localdomain sudo[198320]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:55 np0005548790.localdomain sudo[198430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqwkdocyicrjmgqrcnodkrymxdzbxrxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014055.4672294-2011-217279875770164/AnsiballZ_file.py
Dec 06 09:40:55 np0005548790.localdomain sudo[198430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:55 np0005548790.localdomain python3.9[198432]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:56 np0005548790.localdomain sudo[198430]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:56 np0005548790.localdomain sudo[198540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ricijayspojbzzqwiqsemalhemtikltw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014056.1038203-2011-3499071746888/AnsiballZ_file.py
Dec 06 09:40:56 np0005548790.localdomain sudo[198540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:56 np0005548790.localdomain python3.9[198542]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:56 np0005548790.localdomain sudo[198540]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:56 np0005548790.localdomain sudo[198650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orqtqikoszvcmyjkxcwtjyqjpwrmiczy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014056.7272866-2011-229488334041441/AnsiballZ_file.py
Dec 06 09:40:56 np0005548790.localdomain sudo[198650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:57 np0005548790.localdomain python3.9[198652]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:57 np0005548790.localdomain sudo[198650]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:57 np0005548790.localdomain sudo[198760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbxwkjpicussydprpomuxbmhnlpwhlnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014057.2654235-2011-170976850472996/AnsiballZ_file.py
Dec 06 09:40:57 np0005548790.localdomain sudo[198760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60840 DF PROTO=TCP SPT=43276 DPT=9105 SEQ=2811796419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA3D1F0000000001030307) 
Dec 06 09:40:57 np0005548790.localdomain python3.9[198762]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:57 np0005548790.localdomain sudo[198760]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:58 np0005548790.localdomain sudo[198870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-feydsycitoibygpukskknrhmzmywhtzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014057.827029-2011-247429574472318/AnsiballZ_file.py
Dec 06 09:40:58 np0005548790.localdomain sudo[198870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:40:58 np0005548790.localdomain python3.9[198872]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:40:58 np0005548790.localdomain sudo[198870]: pam_unix(sudo:session): session closed for user root
Dec 06 09:40:59 np0005548790.localdomain sudo[198980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvygrzzioubzminnpqrsajlynipprlvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014059.3848796-2308-146068042133461/AnsiballZ_stat.py
Dec 06 09:40:59 np0005548790.localdomain sudo[198980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:00 np0005548790.localdomain python3.9[198982]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:00 np0005548790.localdomain sudo[198980]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=225 DF PROTO=TCP SPT=34988 DPT=9105 SEQ=3141065904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA499F0000000001030307) 
Dec 06 09:41:01 np0005548790.localdomain sudo[199068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyklkhfllndrgmqlopnhabezccepptpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014059.3848796-2308-146068042133461/AnsiballZ_copy.py
Dec 06 09:41:01 np0005548790.localdomain sudo[199068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:01 np0005548790.localdomain python3.9[199070]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014059.3848796-2308-146068042133461/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:01 np0005548790.localdomain sudo[199068]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:01 np0005548790.localdomain sudo[199178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhvohwlulcbratzilyxeytmdvwvsisik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014061.490499-2308-243422860973484/AnsiballZ_stat.py
Dec 06 09:41:01 np0005548790.localdomain sudo[199178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:01 np0005548790.localdomain python3.9[199180]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:02 np0005548790.localdomain sudo[199178]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:02 np0005548790.localdomain sudo[199266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhyhywkrxdaoovjnxmwasubgbrqzibds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014061.490499-2308-243422860973484/AnsiballZ_copy.py
Dec 06 09:41:02 np0005548790.localdomain sudo[199266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:02 np0005548790.localdomain python3.9[199268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014061.490499-2308-243422860973484/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:02 np0005548790.localdomain sudo[199266]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:02 np0005548790.localdomain sudo[199376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hioqdgojkckbqfogkyrsxesxgxkwians ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014062.6949244-2308-121327439135914/AnsiballZ_stat.py
Dec 06 09:41:02 np0005548790.localdomain sudo[199376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:03 np0005548790.localdomain python3.9[199378]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:03 np0005548790.localdomain sudo[199376]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:03 np0005548790.localdomain sudo[199464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqanlisxknvooegdgozkaahuugcplaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014062.6949244-2308-121327439135914/AnsiballZ_copy.py
Dec 06 09:41:03 np0005548790.localdomain sudo[199464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1598 DF PROTO=TCP SPT=49316 DPT=9882 SEQ=1408846603 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA551F0000000001030307) 
Dec 06 09:41:03 np0005548790.localdomain python3.9[199466]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014062.6949244-2308-121327439135914/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:03 np0005548790.localdomain sudo[199464]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:04 np0005548790.localdomain sudo[199574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtsjtldufdcksmvvoymawmtefmsjfqcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014063.922255-2308-12877617265424/AnsiballZ_stat.py
Dec 06 09:41:04 np0005548790.localdomain sudo[199574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:04 np0005548790.localdomain python3.9[199576]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:04 np0005548790.localdomain sudo[199574]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:04 np0005548790.localdomain sudo[199662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fddyzhmwdnplbxrczqdwltabtfotuxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014063.922255-2308-12877617265424/AnsiballZ_copy.py
Dec 06 09:41:04 np0005548790.localdomain sudo[199662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:05 np0005548790.localdomain python3.9[199664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014063.922255-2308-12877617265424/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:05 np0005548790.localdomain sudo[199662]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:05 np0005548790.localdomain sudo[199772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gunfqokwwytyxckwuyfcqdokhyplfrph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014065.1790285-2308-232765974718714/AnsiballZ_stat.py
Dec 06 09:41:05 np0005548790.localdomain sudo[199772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:05 np0005548790.localdomain python3.9[199774]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:05 np0005548790.localdomain sudo[199772]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:05 np0005548790.localdomain sudo[199860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzqzdrxohrxxrdxvdahmogxvlkubvplj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014065.1790285-2308-232765974718714/AnsiballZ_copy.py
Dec 06 09:41:05 np0005548790.localdomain sudo[199860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:06 np0005548790.localdomain python3.9[199862]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014065.1790285-2308-232765974718714/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:06 np0005548790.localdomain sudo[199860]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:06 np0005548790.localdomain sudo[199970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsxjuajseyxcyovzxkmrpoujyljflwkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014066.3279588-2308-12753189499136/AnsiballZ_stat.py
Dec 06 09:41:06 np0005548790.localdomain sudo[199970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52573 DF PROTO=TCP SPT=44422 DPT=9101 SEQ=3357396556 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA61200000000001030307) 
Dec 06 09:41:06 np0005548790.localdomain python3.9[199972]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:06 np0005548790.localdomain sudo[199970]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:07 np0005548790.localdomain sudo[200058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzhbtqsguxqofrgsleuuqdjidnwtuwcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014066.3279588-2308-12753189499136/AnsiballZ_copy.py
Dec 06 09:41:07 np0005548790.localdomain sudo[200058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:07 np0005548790.localdomain python3.9[200060]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014066.3279588-2308-12753189499136/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:07 np0005548790.localdomain sudo[200058]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:07 np0005548790.localdomain sudo[200168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twbldmsmcsmmkzcgpllzzysqpbqpekvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014067.435631-2308-62632178299117/AnsiballZ_stat.py
Dec 06 09:41:07 np0005548790.localdomain sudo[200168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:07 np0005548790.localdomain python3.9[200170]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:07 np0005548790.localdomain sudo[200168]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:08 np0005548790.localdomain sudo[200256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pocjjcolwfqbezoftysncjunmvzdauad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014067.435631-2308-62632178299117/AnsiballZ_copy.py
Dec 06 09:41:08 np0005548790.localdomain sudo[200256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:08 np0005548790.localdomain python3.9[200258]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014067.435631-2308-62632178299117/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:08 np0005548790.localdomain sudo[200256]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:08 np0005548790.localdomain sudo[200366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqkwwvtoezhygnhwepbvosgctbwpwply ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014068.5583153-2308-187334076094621/AnsiballZ_stat.py
Dec 06 09:41:08 np0005548790.localdomain sudo[200366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:09 np0005548790.localdomain python3.9[200368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:09 np0005548790.localdomain sudo[200366]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:09 np0005548790.localdomain sudo[200454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdyzwvyolaogpteaudlqijibgszxunlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014068.5583153-2308-187334076094621/AnsiballZ_copy.py
Dec 06 09:41:09 np0005548790.localdomain sudo[200454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:09 np0005548790.localdomain python3.9[200456]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014068.5583153-2308-187334076094621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:09 np0005548790.localdomain sudo[200454]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:09 np0005548790.localdomain sudo[200564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grlopebrtyihgapljyceopubzypkjdji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014069.6268845-2308-106412471825724/AnsiballZ_stat.py
Dec 06 09:41:09 np0005548790.localdomain sudo[200564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:10 np0005548790.localdomain python3.9[200566]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:10 np0005548790.localdomain sudo[200564]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43345 DF PROTO=TCP SPT=51504 DPT=9100 SEQ=417053145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA6F1F0000000001030307) 
Dec 06 09:41:10 np0005548790.localdomain sudo[200652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mubiglzmparoeajcimflxavkqqhqifse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014069.6268845-2308-106412471825724/AnsiballZ_copy.py
Dec 06 09:41:10 np0005548790.localdomain sudo[200652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:10 np0005548790.localdomain python3.9[200654]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014069.6268845-2308-106412471825724/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:10 np0005548790.localdomain sudo[200652]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:11 np0005548790.localdomain sudo[200762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wokjqazgxnfkibzthmxiicmptzsmmlcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014070.7764559-2308-136623282145782/AnsiballZ_stat.py
Dec 06 09:41:11 np0005548790.localdomain sudo[200762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:11 np0005548790.localdomain python3.9[200764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:11 np0005548790.localdomain sudo[200762]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:11 np0005548790.localdomain sudo[200850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swlzegtvavgrhxooiorxkcahqjjytivj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014070.7764559-2308-136623282145782/AnsiballZ_copy.py
Dec 06 09:41:11 np0005548790.localdomain sudo[200850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:11 np0005548790.localdomain python3.9[200852]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014070.7764559-2308-136623282145782/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:11 np0005548790.localdomain sudo[200850]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:12 np0005548790.localdomain sudo[200960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljyrlckpymikzxpgxkehbrlmggisgdwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014071.896511-2308-207225023631108/AnsiballZ_stat.py
Dec 06 09:41:12 np0005548790.localdomain sudo[200960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:12 np0005548790.localdomain python3.9[200962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:12 np0005548790.localdomain sudo[200960]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:12 np0005548790.localdomain sudo[201048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbgfggbgzifeodbbmlyjycfndaufgkql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014071.896511-2308-207225023631108/AnsiballZ_copy.py
Dec 06 09:41:12 np0005548790.localdomain sudo[201048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:12 np0005548790.localdomain python3.9[201050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014071.896511-2308-207225023631108/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:12 np0005548790.localdomain sudo[201048]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35787 DF PROTO=TCP SPT=39602 DPT=9100 SEQ=1177338799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA7A9F0000000001030307) 
Dec 06 09:41:13 np0005548790.localdomain sudo[201158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efrnulpovtkmecqxmtybzjutysuipbho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014073.0272043-2308-131114815815407/AnsiballZ_stat.py
Dec 06 09:41:13 np0005548790.localdomain sudo[201158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:13 np0005548790.localdomain python3.9[201160]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:13 np0005548790.localdomain sudo[201158]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:13 np0005548790.localdomain sudo[201246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enfwkekzkvjfassvfcauueztypyvxfqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014073.0272043-2308-131114815815407/AnsiballZ_copy.py
Dec 06 09:41:13 np0005548790.localdomain sudo[201246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:14 np0005548790.localdomain python3.9[201248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014073.0272043-2308-131114815815407/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:14 np0005548790.localdomain sudo[201246]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 np0005548790.localdomain sudo[201356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfxckmzjdinqtagzirpaehhaqzbxjbxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014074.128409-2308-189222117365210/AnsiballZ_stat.py
Dec 06 09:41:14 np0005548790.localdomain sudo[201356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:14 np0005548790.localdomain python3.9[201358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:14 np0005548790.localdomain sudo[201356]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:14 np0005548790.localdomain sudo[201444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqsldetltslzozjwqfzdrmdutomenzzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014074.128409-2308-189222117365210/AnsiballZ_copy.py
Dec 06 09:41:14 np0005548790.localdomain sudo[201444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:15 np0005548790.localdomain python3.9[201446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014074.128409-2308-189222117365210/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:15 np0005548790.localdomain sudo[201444]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:15 np0005548790.localdomain sudo[201554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llpjutygtcenhmfltnjhuwtdkaneuuag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014075.2539988-2308-72506398803839/AnsiballZ_stat.py
Dec 06 09:41:15 np0005548790.localdomain sudo[201554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:15 np0005548790.localdomain python3.9[201556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:15 np0005548790.localdomain sudo[201554]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:16 np0005548790.localdomain sudo[201642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jryvmjgerhzzztrtwwkkyaxptwouipnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014075.2539988-2308-72506398803839/AnsiballZ_copy.py
Dec 06 09:41:16 np0005548790.localdomain sudo[201642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:16 np0005548790.localdomain python3.9[201644]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014075.2539988-2308-72506398803839/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:16 np0005548790.localdomain sudo[201642]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8045 DF PROTO=TCP SPT=56358 DPT=9102 SEQ=4173206771 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA8EA10000000001030307) 
Dec 06 09:41:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63364 DF PROTO=TCP SPT=36586 DPT=9882 SEQ=489569675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA8EA70000000001030307) 
Dec 06 09:41:18 np0005548790.localdomain python3.9[201752]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:19 np0005548790.localdomain sudo[201863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rklqlvxinonhtekqkqeezohtcelgjbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014079.1590521-2927-175108458489150/AnsiballZ_seboolean.py
Dec 06 09:41:19 np0005548790.localdomain sudo[201863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:19 np0005548790.localdomain python3.9[201865]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 06 09:41:20 np0005548790.localdomain sudo[201863]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63366 DF PROTO=TCP SPT=36586 DPT=9882 SEQ=489569675 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FA9A9F0000000001030307) 
Dec 06 09:41:21 np0005548790.localdomain sudo[201973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajthdffqazfaekacblkqslmvsxldruqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014081.1976721-2957-133373910615874/AnsiballZ_systemd.py
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:41:21 np0005548790.localdomain sudo[201973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:21 np0005548790.localdomain podman[201975]: 2025-12-06 09:41:21.588119208 +0000 UTC m=+0.091405794 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:41:21 np0005548790.localdomain podman[201975]: 2025-12-06 09:41:21.593998566 +0000 UTC m=+0.097285142 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 09:41:21 np0005548790.localdomain podman[201976]: 2025-12-06 09:41:21.645265711 +0000 UTC m=+0.146781219 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:41:21 np0005548790.localdomain podman[201976]: 2025-12-06 09:41:21.685134541 +0000 UTC m=+0.186650049 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:41:21 np0005548790.localdomain python3.9[201977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:41:21 np0005548790.localdomain systemd-rc-local-generator[202038]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:21 np0005548790.localdomain systemd-sysv-generator[202044]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:21 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Starting libvirt logging daemon socket...
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Listening on libvirt logging daemon socket.
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Starting libvirt logging daemon admin socket...
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Starting libvirt logging daemon...
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Started libvirt logging daemon.
Dec 06 09:41:22 np0005548790.localdomain sudo[201973]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:22 np0005548790.localdomain sudo[202167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-krtpcjycqqdfxaoeqmcjkbffncwztrxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014082.370656-2957-53280050583713/AnsiballZ_systemd.py
Dec 06 09:41:22 np0005548790.localdomain sudo[202167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:22 np0005548790.localdomain python3.9[202169]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:41:22 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:41:23 np0005548790.localdomain systemd-sysv-generator[202197]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:23 np0005548790.localdomain systemd-rc-local-generator[202192]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Starting libvirt nodedev daemon socket...
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 06 09:41:23 np0005548790.localdomain systemd[1]: Started libvirt nodedev daemon.
Dec 06 09:41:23 np0005548790.localdomain sudo[202167]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:23 np0005548790.localdomain sudo[202342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjtdzdfkcqubwgdbwogjxndynxrvlgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014083.5358558-2957-93771656865834/AnsiballZ_systemd.py
Dec 06 09:41:23 np0005548790.localdomain sudo[202342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:24 np0005548790.localdomain python3.9[202344]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:41:24 np0005548790.localdomain systemd-sysv-generator[202372]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:24 np0005548790.localdomain systemd-rc-local-generator[202366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Starting libvirt proxy daemon socket...
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Listening on libvirt proxy daemon socket.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Started libvirt proxy daemon.
Dec 06 09:41:24 np0005548790.localdomain sudo[202342]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30561 DF PROTO=TCP SPT=56164 DPT=9105 SEQ=1711170622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FAA71F0000000001030307) 
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 06 09:41:24 np0005548790.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 06 09:41:24 np0005548790.localdomain sudo[202520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzchvwvwcomxcbstwlkfiitajhsiwboq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014084.6663475-2957-260294866620365/AnsiballZ_systemd.py
Dec 06 09:41:24 np0005548790.localdomain sudo[202520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:25 np0005548790.localdomain python3.9[202522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:41:25 np0005548790.localdomain systemd-sysv-generator[202553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:25 np0005548790.localdomain systemd-rc-local-generator[202549]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Listening on libvirt locking daemon socket.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Starting libvirt QEMU daemon socket...
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 06 09:41:25 np0005548790.localdomain systemd[1]: Started libvirt QEMU daemon.
Dec 06 09:41:25 np0005548790.localdomain sudo[202520]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:25 np0005548790.localdomain setroubleshoot[202381]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 6854adf3-d4e4-4fe1-be4f-55f6065d0eb1
Dec 06 09:41:25 np0005548790.localdomain setroubleshoot[202381]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                                 
                                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                                 
                                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                                 Do
                                                                 
                                                                 Turn on full auditing
                                                                 # auditctl -w /etc/shadow -p w
                                                                 Try to recreate AVC. Then execute
                                                                 # ausearch -m avc -ts recent
                                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                                 otherwise report as a bugzilla.
                                                                 
                                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                                 
                                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                                 # semodule -X 300 -i my-virtlogd.pp
                                                                 
Dec 06 09:41:25 np0005548790.localdomain setroubleshoot[202381]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 6854adf3-d4e4-4fe1-be4f-55f6065d0eb1
Dec 06 09:41:25 np0005548790.localdomain setroubleshoot[202381]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                                 
                                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                                 
                                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                                 Do
                                                                 
                                                                 Turn on full auditing
                                                                 # auditctl -w /etc/shadow -p w
                                                                 Try to recreate AVC. Then execute
                                                                 # ausearch -m avc -ts recent
                                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                                 otherwise report as a bugzilla.
                                                                 
                                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                                 
                                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                                 # semodule -X 300 -i my-virtlogd.pp
                                                                 
Dec 06 09:41:26 np0005548790.localdomain sudo[202696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqbugoljlfrmxarrgrfbhzxnujwcwckg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014085.8137517-2957-157474882186633/AnsiballZ_systemd.py
Dec 06 09:41:26 np0005548790.localdomain sudo[202696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:26 np0005548790.localdomain python3.9[202698]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:41:26 np0005548790.localdomain systemd-rc-local-generator[202726]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:41:26 np0005548790.localdomain systemd-sysv-generator[202729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Starting libvirt secret daemon socket...
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Listening on libvirt secret daemon socket.
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Starting libvirt secret daemon admin socket...
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 06 09:41:26 np0005548790.localdomain systemd[1]: Started libvirt secret daemon.
Dec 06 09:41:26 np0005548790.localdomain sudo[202696]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:27 np0005548790.localdomain sudo[202867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqubdzgolcvmakeyqlmsrvduworqiqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014087.2548242-3067-238381043381710/AnsiballZ_file.py
Dec 06 09:41:27 np0005548790.localdomain sudo[202867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:27 np0005548790.localdomain python3.9[202869]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16877 DF PROTO=TCP SPT=37762 DPT=9105 SEQ=2248873528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FAB31F0000000001030307) 
Dec 06 09:41:27 np0005548790.localdomain sudo[202867]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:28 np0005548790.localdomain sudo[202977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdefckrvdaqjcjcieqvyoltzmgzdowax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014087.937901-3092-209111242572108/AnsiballZ_find.py
Dec 06 09:41:28 np0005548790.localdomain sudo[202977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:28 np0005548790.localdomain python3.9[202979]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:41:28 np0005548790.localdomain sudo[202977]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:28 np0005548790.localdomain sudo[203087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwumwkcacixrpvexunxlzkgzgozcyebs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014088.6459427-3115-242149337645550/AnsiballZ_command.py
Dec 06 09:41:28 np0005548790.localdomain sudo[203087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:29 np0005548790.localdomain python3.9[203089]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                                            echo ceph
                                                            awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:29 np0005548790.localdomain sudo[203087]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:29 np0005548790.localdomain python3.9[203201]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:41:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30563 DF PROTO=TCP SPT=56164 DPT=9105 SEQ=1711170622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FABEDF0000000001030307) 
Dec 06 09:41:30 np0005548790.localdomain python3.9[203309]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:31 np0005548790.localdomain python3.9[203395]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014090.3611362-3172-57802078854460/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9621e6cf70c8e0de93f1c73ff2a387c8c3ac4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:32 np0005548790.localdomain sudo[203503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afnmgkbqykwqnqkbyqmomrivqzszkhwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014091.4559965-3217-125227181716382/AnsiballZ_command.py
Dec 06 09:41:32 np0005548790.localdomain sudo[203503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:32 np0005548790.localdomain python3.9[203505]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 1939e851-b10c-5c3b-9bb7-8e7f380233e8
                                                            virsh secret-define --file /tmp/secret.xml
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:32 np0005548790.localdomain polkitd[1036]: Registered Authentication Agent for unix-process:203507:1050789 (system bus name :1.2843 [pkttyagent --process 203507 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 06 09:41:32 np0005548790.localdomain polkitd[1036]: Unregistered Authentication Agent for unix-process:203507:1050789 (system bus name :1.2843, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 06 09:41:32 np0005548790.localdomain polkitd[1036]: Registered Authentication Agent for unix-process:203506:1050789 (system bus name :1.2844 [pkttyagent --process 203506 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 06 09:41:32 np0005548790.localdomain polkitd[1036]: Unregistered Authentication Agent for unix-process:203506:1050789 (system bus name :1.2844, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 06 09:41:32 np0005548790.localdomain sudo[203503]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:33 np0005548790.localdomain python3.9[203625]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8049 DF PROTO=TCP SPT=56358 DPT=9102 SEQ=4173206771 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FACB1F0000000001030307) 
Dec 06 09:41:34 np0005548790.localdomain sudo[203733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbuejsjugnjlrqilcuotlnmfwzwuoldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014093.7890115-3265-8211098831466/AnsiballZ_command.py
Dec 06 09:41:34 np0005548790.localdomain sudo[203733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:34 np0005548790.localdomain sudo[203733]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:35 np0005548790.localdomain sudo[203844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wznecnnkmvtchqpmsechbunpkrrjbqrk ; FSID=1939e851-b10c-5c3b-9bb7-8e7f380233e8 KEY=AQC14jNpAAAAABAAVDrRWQiDxWIwal0FbWGWhA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014095.0249176-3290-240608636947508/AnsiballZ_command.py
Dec 06 09:41:35 np0005548790.localdomain sudo[203844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:35 np0005548790.localdomain polkitd[1036]: Registered Authentication Agent for unix-process:203847:1051071 (system bus name :1.2847 [pkttyagent --process 203847 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 06 09:41:35 np0005548790.localdomain polkitd[1036]: Unregistered Authentication Agent for unix-process:203847:1051071 (system bus name :1.2847, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 06 09:41:35 np0005548790.localdomain sudo[203844]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:35 np0005548790.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 06 09:41:35 np0005548790.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 06 09:41:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29745 DF PROTO=TCP SPT=48682 DPT=9101 SEQ=85510950 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FAD51F0000000001030307) 
Dec 06 09:41:37 np0005548790.localdomain sudo[203960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izixvxhgxdenxflugzjbbbxlmvxnsool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014096.833561-3314-150754354410635/AnsiballZ_copy.py
Dec 06 09:41:37 np0005548790.localdomain sudo[203960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:37 np0005548790.localdomain python3.9[203962]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:37 np0005548790.localdomain sudo[203960]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:37 np0005548790.localdomain sudo[204070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egivetlepxdlcrnpqliwipxydianqcps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014097.5394495-3338-15016254563996/AnsiballZ_stat.py
Dec 06 09:41:37 np0005548790.localdomain sudo[204070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:38 np0005548790.localdomain python3.9[204072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:38 np0005548790.localdomain sudo[204070]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:38 np0005548790.localdomain sudo[204158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocomoykjzefzvcktzjtwykwoyjtnqtxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014097.5394495-3338-15016254563996/AnsiballZ_copy.py
Dec 06 09:41:38 np0005548790.localdomain sudo[204158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:38 np0005548790.localdomain python3.9[204160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014097.5394495-3338-15016254563996/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:38 np0005548790.localdomain sudo[204158]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:39 np0005548790.localdomain sudo[204268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oglernhqbokjaoqngynvvhwbtmriptzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014098.8694632-3386-219570105844868/AnsiballZ_file.py
Dec 06 09:41:39 np0005548790.localdomain sudo[204268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:39 np0005548790.localdomain python3.9[204270]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:39 np0005548790.localdomain sudo[204268]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:39 np0005548790.localdomain sudo[204378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnrekaxrtzssilpmopwnspgzsoszmqpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014099.598794-3410-86666602594569/AnsiballZ_stat.py
Dec 06 09:41:39 np0005548790.localdomain sudo[204378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47589 DF PROTO=TCP SPT=46724 DPT=9100 SEQ=3627444048 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FAE31F0000000001030307) 
Dec 06 09:41:40 np0005548790.localdomain python3.9[204380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:40 np0005548790.localdomain sudo[204378]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:40 np0005548790.localdomain sudo[204435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdnrqxvkpbwrlfaeuomjstrijtgymlli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014099.598794-3410-86666602594569/AnsiballZ_file.py
Dec 06 09:41:40 np0005548790.localdomain sudo[204435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:40 np0005548790.localdomain python3.9[204437]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:40 np0005548790.localdomain sudo[204435]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:41 np0005548790.localdomain sudo[204545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htnanskbvqaygcbzwbqxjaqfdjurzjkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014100.8133197-3445-122717297066042/AnsiballZ_stat.py
Dec 06 09:41:41 np0005548790.localdomain sudo[204545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:41 np0005548790.localdomain python3.9[204547]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:41 np0005548790.localdomain sudo[204545]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:41 np0005548790.localdomain sudo[204602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iokfoytwokfrbllqkrckdqepprwsexlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014100.8133197-3445-122717297066042/AnsiballZ_file.py
Dec 06 09:41:41 np0005548790.localdomain sudo[204602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:41 np0005548790.localdomain python3.9[204604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5cn5vjmj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:41 np0005548790.localdomain sudo[204602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:42 np0005548790.localdomain sudo[204712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-engdywsybhcykzvrssvgczwwvdmlwbof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014102.0048475-3482-279967506248329/AnsiballZ_stat.py
Dec 06 09:41:42 np0005548790.localdomain sudo[204712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:42 np0005548790.localdomain python3.9[204714]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:42 np0005548790.localdomain sudo[204712]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:42 np0005548790.localdomain sudo[204769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xleigvgkwvadznpxilzrdwyneekbewxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014102.0048475-3482-279967506248329/AnsiballZ_file.py
Dec 06 09:41:42 np0005548790.localdomain sudo[204769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:42 np0005548790.localdomain python3.9[204771]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:42 np0005548790.localdomain sudo[204769]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28235 DF PROTO=TCP SPT=60390 DPT=9100 SEQ=2655460767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FAEFDF0000000001030307) 
Dec 06 09:41:43 np0005548790.localdomain sudo[204879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqeicyvsvssygtcpjkksezqfgzaoeeit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014103.193693-3520-19772065448076/AnsiballZ_command.py
Dec 06 09:41:43 np0005548790.localdomain sudo[204879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:43 np0005548790.localdomain python3.9[204881]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:43 np0005548790.localdomain sudo[204879]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:44 np0005548790.localdomain sudo[204990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dioudbbwcfewtjpzjdzhogbojvglxmfx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014103.9585948-3545-240624558621245/AnsiballZ_edpm_nftables_from_files.py
Dec 06 09:41:44 np0005548790.localdomain sudo[204990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:44 np0005548790.localdomain python3[204992]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:41:44 np0005548790.localdomain sudo[204990]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:45 np0005548790.localdomain sudo[205100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ldciqpeqqzrrwshffuhrqxxpzizgjrcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014104.7507546-3569-196472672900951/AnsiballZ_stat.py
Dec 06 09:41:45 np0005548790.localdomain sudo[205100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:45 np0005548790.localdomain python3.9[205102]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:45 np0005548790.localdomain sudo[205100]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:45 np0005548790.localdomain sudo[205157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylveaatxtsndqyjakuyzkfmaslfslrbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014104.7507546-3569-196472672900951/AnsiballZ_file.py
Dec 06 09:41:45 np0005548790.localdomain sudo[205157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:45 np0005548790.localdomain python3.9[205159]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:45 np0005548790.localdomain sudo[205157]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:47 np0005548790.localdomain sudo[205267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmctgagbdjiwoyddszabppkddhiyizcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014106.7227266-3605-177086893413638/AnsiballZ_stat.py
Dec 06 09:41:47 np0005548790.localdomain sudo[205267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:47 np0005548790.localdomain python3.9[205269]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:47 np0005548790.localdomain sudo[205267]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:47 np0005548790.localdomain sudo[205324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjfsseymvlcaefyhgparphycuzzfqkrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014106.7227266-3605-177086893413638/AnsiballZ_file.py
Dec 06 09:41:47 np0005548790.localdomain sudo[205324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:47 np0005548790.localdomain python3.9[205326]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:47 np0005548790.localdomain sudo[205324]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:48 np0005548790.localdomain sudo[205434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efgjimxvdiaklzbmkbdhncrgskcnkmjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014107.9229805-3641-132133748953604/AnsiballZ_stat.py
Dec 06 09:41:48 np0005548790.localdomain sudo[205434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:41:48.348 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:41:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:41:48.349 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:41:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:41:48.349 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:41:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53858 DF PROTO=TCP SPT=46586 DPT=9102 SEQ=1799609212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB03D10000000001030307) 
Dec 06 09:41:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53019 DF PROTO=TCP SPT=55230 DPT=9882 SEQ=2053797910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB03D80000000001030307) 
Dec 06 09:41:48 np0005548790.localdomain python3.9[205436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:48 np0005548790.localdomain sudo[205434]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:49 np0005548790.localdomain sudo[205491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clobwfsjnslzgmcgegfrwvifjumxwble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014107.9229805-3641-132133748953604/AnsiballZ_file.py
Dec 06 09:41:49 np0005548790.localdomain sudo[205491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:49 np0005548790.localdomain python3.9[205493]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:49 np0005548790.localdomain sudo[205491]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:49 np0005548790.localdomain sudo[205601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hchygkbammehycbfuefiuvpdjartzuje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014109.5247097-3677-270062382161331/AnsiballZ_stat.py
Dec 06 09:41:49 np0005548790.localdomain sudo[205601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:50 np0005548790.localdomain python3.9[205603]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:50 np0005548790.localdomain sudo[205601]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:50 np0005548790.localdomain sudo[205658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwhqkqxtxbkwuqahhweuutxblghgbjhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014109.5247097-3677-270062382161331/AnsiballZ_file.py
Dec 06 09:41:50 np0005548790.localdomain sudo[205658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:50 np0005548790.localdomain python3.9[205660]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:50 np0005548790.localdomain sudo[205658]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:51 np0005548790.localdomain sudo[205768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbysjfessiqvswxjhdfhaxlaenhbbfzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014110.8786929-3714-110635316262365/AnsiballZ_stat.py
Dec 06 09:41:51 np0005548790.localdomain sudo[205768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:51 np0005548790.localdomain python3.9[205770]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:51 np0005548790.localdomain sudo[205768]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53021 DF PROTO=TCP SPT=55230 DPT=9882 SEQ=2053797910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB0FE00000000001030307) 
Dec 06 09:41:51 np0005548790.localdomain sudo[205858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xyooagewcwzawvvopxtzdmvdtnuecocb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014110.8786929-3714-110635316262365/AnsiballZ_copy.py
Dec 06 09:41:51 np0005548790.localdomain sudo[205858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:41:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:41:51 np0005548790.localdomain podman[205862]: 2025-12-06 09:41:51.901870297 +0000 UTC m=+0.087856813 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:41:51 np0005548790.localdomain systemd[1]: tmp-crun.fobMLh.mount: Deactivated successfully.
Dec 06 09:41:51 np0005548790.localdomain podman[205861]: 2025-12-06 09:41:51.94756974 +0000 UTC m=+0.134474282 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:41:51 np0005548790.localdomain podman[205862]: 2025-12-06 09:41:51.973123734 +0000 UTC m=+0.159110250 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 09:41:51 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:41:51 np0005548790.localdomain podman[205861]: 2025-12-06 09:41:51.988217039 +0000 UTC m=+0.175121581 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:41:52 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:41:52 np0005548790.localdomain python3.9[205860]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014110.8786929-3714-110635316262365/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:52 np0005548790.localdomain sudo[205858]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:52 np0005548790.localdomain sudo[206008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahboctyyhajsfwpbxfupmpcnbjtzmuop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014112.2904124-3757-227526560423971/AnsiballZ_file.py
Dec 06 09:41:52 np0005548790.localdomain sudo[206008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:52 np0005548790.localdomain python3.9[206010]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:52 np0005548790.localdomain sudo[206008]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:53 np0005548790.localdomain sudo[206118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zyjnejxokoreunvnrnrldybsynlezzte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014112.93538-3782-279603901839373/AnsiballZ_command.py
Dec 06 09:41:53 np0005548790.localdomain sudo[206118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:53 np0005548790.localdomain python3.9[206120]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:53 np0005548790.localdomain sudo[206118]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:54 np0005548790.localdomain sudo[206231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ssngiqafzrxkqbvrhlhcjrdarfujmhcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014113.649503-3805-189379051537069/AnsiballZ_blockinfile.py
Dec 06 09:41:54 np0005548790.localdomain sudo[206231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:54 np0005548790.localdomain python3.9[206233]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:54 np0005548790.localdomain sudo[206231]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17488 DF PROTO=TCP SPT=56312 DPT=9105 SEQ=4006795017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB1C1F0000000001030307) 
Dec 06 09:41:54 np0005548790.localdomain sudo[206294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:41:54 np0005548790.localdomain sudo[206294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:41:54 np0005548790.localdomain sudo[206294]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:54 np0005548790.localdomain sudo[206334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:41:54 np0005548790.localdomain sudo[206334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:41:55 np0005548790.localdomain sudo[206376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjzwzlhkrmcewrtralklyspcyjjafbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014114.7480643-3833-107271486320757/AnsiballZ_command.py
Dec 06 09:41:55 np0005548790.localdomain sudo[206376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:55 np0005548790.localdomain python3.9[206379]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:55 np0005548790.localdomain sudo[206376]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:55 np0005548790.localdomain sudo[206334]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:55 np0005548790.localdomain sudo[206521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aubzqodkkwaihtigohhepsvhollaqoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014115.4745598-3857-89440979069123/AnsiballZ_stat.py
Dec 06 09:41:55 np0005548790.localdomain sudo[206521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:56 np0005548790.localdomain python3.9[206523]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:41:56 np0005548790.localdomain sudo[206521]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:56 np0005548790.localdomain sudo[206633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scknrgjhbctwztgbvpnkymqsjmidlerk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014116.3551278-3882-83060789322154/AnsiballZ_command.py
Dec 06 09:41:56 np0005548790.localdomain sudo[206633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:56 np0005548790.localdomain python3.9[206635]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:41:56 np0005548790.localdomain sudo[206633]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:57 np0005548790.localdomain sudo[206746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nexeyhqwoikwmiduhjwjbyrqdgrnisnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014117.108022-3905-93181379351310/AnsiballZ_file.py
Dec 06 09:41:57 np0005548790.localdomain sudo[206746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=228 DF PROTO=TCP SPT=34988 DPT=9105 SEQ=3141065904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB271F0000000001030307) 
Dec 06 09:41:57 np0005548790.localdomain python3.9[206748]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:57 np0005548790.localdomain sudo[206746]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:58 np0005548790.localdomain sudo[206856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyfaogslhnxjatsbvwmbfgsnbytirdpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014117.8365674-3929-250518106338007/AnsiballZ_stat.py
Dec 06 09:41:58 np0005548790.localdomain sudo[206856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:58 np0005548790.localdomain python3.9[206858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:58 np0005548790.localdomain sudo[206856]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:58 np0005548790.localdomain sudo[206859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:41:58 np0005548790.localdomain sudo[206859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:41:58 np0005548790.localdomain sudo[206859]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:58 np0005548790.localdomain sudo[206962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxegqahmmofmwrlhsghaknsecetqmacl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014117.8365674-3929-250518106338007/AnsiballZ_copy.py
Dec 06 09:41:58 np0005548790.localdomain sudo[206962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:58 np0005548790.localdomain python3.9[206964]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014117.8365674-3929-250518106338007/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:41:58 np0005548790.localdomain sudo[206962]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:59 np0005548790.localdomain sudo[207072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnawooempngmickxftdkqhbtjynftnvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014119.1320217-3974-23088459767940/AnsiballZ_stat.py
Dec 06 09:41:59 np0005548790.localdomain sudo[207072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:41:59 np0005548790.localdomain python3.9[207074]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:41:59 np0005548790.localdomain sudo[207072]: pam_unix(sudo:session): session closed for user root
Dec 06 09:41:59 np0005548790.localdomain sudo[207160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdpuqesmccfkuvmwfqgjexjohsmypvcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014119.1320217-3974-23088459767940/AnsiballZ_copy.py
Dec 06 09:41:59 np0005548790.localdomain sudo[207160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:00 np0005548790.localdomain python3.9[207162]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014119.1320217-3974-23088459767940/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:00 np0005548790.localdomain sudo[207160]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:00 np0005548790.localdomain sudo[207270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-niswycutkhzdshjakjdpvhxfiipxlsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014120.3518958-4019-86926615341370/AnsiballZ_stat.py
Dec 06 09:42:00 np0005548790.localdomain sudo[207270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17490 DF PROTO=TCP SPT=56312 DPT=9105 SEQ=4006795017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB33DF0000000001030307) 
Dec 06 09:42:00 np0005548790.localdomain python3.9[207272]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:42:00 np0005548790.localdomain sudo[207270]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:01 np0005548790.localdomain sudo[207358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fznppmakzwabvasmpdvloelrtibvjuwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014120.3518958-4019-86926615341370/AnsiballZ_copy.py
Dec 06 09:42:01 np0005548790.localdomain sudo[207358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:01 np0005548790.localdomain python3.9[207360]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014120.3518958-4019-86926615341370/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:01 np0005548790.localdomain sudo[207358]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:01 np0005548790.localdomain sudo[207468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiamnklfeifkzfnyvoyiqurtsziqbudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014121.7102573-4064-208657203539331/AnsiballZ_systemd.py
Dec 06 09:42:01 np0005548790.localdomain sudo[207468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:02 np0005548790.localdomain python3.9[207470]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:42:02 np0005548790.localdomain systemd-rc-local-generator[207492]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:02 np0005548790.localdomain systemd-sysv-generator[207496]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:02 np0005548790.localdomain systemd[1]: Reached target edpm_libvirt.target.
Dec 06 09:42:02 np0005548790.localdomain sudo[207468]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:03 np0005548790.localdomain sudo[207618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oiolvvtfvtqoowzsebadqnrlenwhxpow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014122.942267-4088-2782160611306/AnsiballZ_systemd.py
Dec 06 09:42:03 np0005548790.localdomain sudo[207618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:03 np0005548790.localdomain python3.9[207620]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:42:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53862 DF PROTO=TCP SPT=46586 DPT=9102 SEQ=1799609212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB3F1F0000000001030307) 
Dec 06 09:42:03 np0005548790.localdomain systemd-sysv-generator[207645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:03 np0005548790.localdomain systemd-rc-local-generator[207642]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:03 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:42:04 np0005548790.localdomain systemd-rc-local-generator[207680]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:04 np0005548790.localdomain systemd-sysv-generator[207685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:04 np0005548790.localdomain sudo[207618]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:04 np0005548790.localdomain sshd[159384]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: session-52.scope: Deactivated successfully.
Dec 06 09:42:04 np0005548790.localdomain systemd[1]: session-52.scope: Consumed 3min 38.059s CPU time.
Dec 06 09:42:04 np0005548790.localdomain systemd-logind[760]: Session 52 logged out. Waiting for processes to exit.
Dec 06 09:42:04 np0005548790.localdomain systemd-logind[760]: Removed session 52.
Dec 06 09:42:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40242 DF PROTO=TCP SPT=33878 DPT=9101 SEQ=1794420539 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB4B1F0000000001030307) 
Dec 06 09:42:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35790 DF PROTO=TCP SPT=39602 DPT=9100 SEQ=1177338799 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB59200000000001030307) 
Dec 06 09:42:11 np0005548790.localdomain sshd[207712]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:42:11 np0005548790.localdomain sshd[207712]: Accepted publickey for zuul from 192.168.122.30 port 59466 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:42:11 np0005548790.localdomain systemd-logind[760]: New session 53 of user zuul.
Dec 06 09:42:11 np0005548790.localdomain systemd[1]: Started Session 53 of User zuul.
Dec 06 09:42:11 np0005548790.localdomain sshd[207712]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:42:12 np0005548790.localdomain python3.9[207823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:42:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55789 DF PROTO=TCP SPT=43022 DPT=9100 SEQ=1151479475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB651F0000000001030307) 
Dec 06 09:42:14 np0005548790.localdomain python3.9[207935]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:42:14 np0005548790.localdomain network[207952]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:42:14 np0005548790.localdomain network[207953]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:42:14 np0005548790.localdomain network[207954]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:42:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:42:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62546 DF PROTO=TCP SPT=48972 DPT=9102 SEQ=1208868583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB79010000000001030307) 
Dec 06 09:42:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61273 DF PROTO=TCP SPT=60574 DPT=9882 SEQ=44072350 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB79080000000001030307) 
Dec 06 09:42:20 np0005548790.localdomain sudo[208184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psfkuwghiizbvbkkgqizkonwvfiqnlnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014140.435311-103-253774178849274/AnsiballZ_setup.py
Dec 06 09:42:20 np0005548790.localdomain sudo[208184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:21 np0005548790.localdomain python3.9[208186]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:42:21 np0005548790.localdomain sudo[208184]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62548 DF PROTO=TCP SPT=48972 DPT=9102 SEQ=1208868583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB85200000000001030307) 
Dec 06 09:42:21 np0005548790.localdomain sudo[208247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbnireoksjnojeffkkoprwzeqtukhqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014140.435311-103-253774178849274/AnsiballZ_dnf.py
Dec 06 09:42:21 np0005548790.localdomain sudo[208247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:22 np0005548790.localdomain python3.9[208249]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:42:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:42:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:42:22 np0005548790.localdomain podman[208252]: 2025-12-06 09:42:22.569812995 +0000 UTC m=+0.081598946 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 09:42:22 np0005548790.localdomain systemd[1]: tmp-crun.reuRCq.mount: Deactivated successfully.
Dec 06 09:42:22 np0005548790.localdomain podman[208251]: 2025-12-06 09:42:22.669729371 +0000 UTC m=+0.181773098 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:42:22 np0005548790.localdomain podman[208251]: 2025-12-06 09:42:22.678064604 +0000 UTC m=+0.190108321 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 09:42:22 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:42:22 np0005548790.localdomain podman[208252]: 2025-12-06 09:42:22.72833237 +0000 UTC m=+0.240118321 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:42:22 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:42:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4514 DF PROTO=TCP SPT=52840 DPT=9105 SEQ=211483315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB915F0000000001030307) 
Dec 06 09:42:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30566 DF PROTO=TCP SPT=56164 DPT=9105 SEQ=1711170622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FB9D1F0000000001030307) 
Dec 06 09:42:29 np0005548790.localdomain sudo[208247]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:30 np0005548790.localdomain sudo[208402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odoepjtesckqnpgeycvtnuvgrpojrgkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014149.8665674-140-243243308380108/AnsiballZ_stat.py
Dec 06 09:42:30 np0005548790.localdomain sudo[208402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:30 np0005548790.localdomain python3.9[208404]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:42:30 np0005548790.localdomain sudo[208402]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4516 DF PROTO=TCP SPT=52840 DPT=9105 SEQ=211483315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBA91F0000000001030307) 
Dec 06 09:42:31 np0005548790.localdomain sudo[208514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-raplzbldppyragklxrjsxcmcyuukbjxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014150.7191114-164-67455878377962/AnsiballZ_copy.py
Dec 06 09:42:31 np0005548790.localdomain sudo[208514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:31 np0005548790.localdomain python3.9[208516]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:31 np0005548790.localdomain sudo[208514]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:31 np0005548790.localdomain sudo[208624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhsrsvetxmunpyvlfcqztkembujgjavr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014151.5703022-187-109642818453444/AnsiballZ_command.py
Dec 06 09:42:31 np0005548790.localdomain sudo[208624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:32 np0005548790.localdomain python3.9[208626]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:42:32 np0005548790.localdomain sudo[208624]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:33 np0005548790.localdomain sudo[208735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thnthuygerhhyfrattjjvwkpsuvmyktr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014152.451108-212-15024836537330/AnsiballZ_command.py
Dec 06 09:42:33 np0005548790.localdomain sudo[208735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:33 np0005548790.localdomain python3.9[208737]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:42:33 np0005548790.localdomain sudo[208735]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62550 DF PROTO=TCP SPT=48972 DPT=9102 SEQ=1208868583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBB5200000000001030307) 
Dec 06 09:42:34 np0005548790.localdomain sudo[208846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqiqzozndelipnvrlqehplnkigxxgvtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014153.8462-236-238625158016912/AnsiballZ_command.py
Dec 06 09:42:34 np0005548790.localdomain sudo[208846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:34 np0005548790.localdomain python3.9[208848]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:42:34 np0005548790.localdomain sudo[208846]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:34 np0005548790.localdomain sudo[208957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orlufcxqmzidxyzqoequzicrlfnmemdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014154.6791494-262-275502967916395/AnsiballZ_stat.py
Dec 06 09:42:34 np0005548790.localdomain sudo[208957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:35 np0005548790.localdomain python3.9[208959]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:42:35 np0005548790.localdomain sudo[208957]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:36 np0005548790.localdomain sudo[209069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cammipvqypnfvrsthuxqtsvcjqeqiuxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014156.082852-295-189151352214310/AnsiballZ_lineinfile.py
Dec 06 09:42:36 np0005548790.localdomain sudo[209069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:36 np0005548790.localdomain python3.9[209071]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:36 np0005548790.localdomain sudo[209069]: pam_unix(sudo:session): session closed for user root
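The lineinfile task logged above ensures a single `node.session.auth.chap_algs` setting in iscsid.conf, replacing an existing line or appending one. A rough Python equivalent of that replace-or-append logic (the file path and stand-in default content are assumptions; Ansible's insertafter placement is simplified to a plain append):

```python
# Sketch of the lineinfile semantics from the task above: ensure the chap_algs
# line exists exactly once in iscsid.conf.
import re
from pathlib import Path

conf = Path("/tmp/iscsid.conf")           # /etc/iscsi/iscsid.conf on the real host
conf.write_text("#node.session.auth.chap.algs are listed below\n")  # stand-in default
line = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

text = conf.read_text()
pat = re.compile(r"^node\.session\.auth\.chap_algs.*$", re.M)
if pat.search(text):
    text = pat.sub(line, text)            # replace an existing (uncommented) setting
else:
    text += line + "\n"                   # otherwise append; Ansible would insert after the comment
conf.write_text(text)
```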
Dec 06 09:42:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1714 DF PROTO=TCP SPT=44910 DPT=9100 SEQ=3587826138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBC29F0000000001030307) 
Dec 06 09:42:37 np0005548790.localdomain sudo[209179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpgqlfubhdupmjfgznntvmlkzvzcgkxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014157.0610094-322-250907733245166/AnsiballZ_systemd_service.py
Dec 06 09:42:37 np0005548790.localdomain sudo[209179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:38 np0005548790.localdomain python3.9[209181]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:42:38 np0005548790.localdomain systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 06 09:42:38 np0005548790.localdomain sudo[209179]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:38 np0005548790.localdomain sudo[209293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zoesendtbrscxfegtfmcbdgbqrsgudgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014158.4605477-347-239701263104482/AnsiballZ_systemd_service.py
Dec 06 09:42:38 np0005548790.localdomain sudo[209293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:38 np0005548790.localdomain python3.9[209295]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:42:40 np0005548790.localdomain systemd-rc-local-generator[209322]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:42:40 np0005548790.localdomain systemd-sysv-generator[209328]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:42:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28238 DF PROTO=TCP SPT=60390 DPT=9100 SEQ=2655460767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBCF1F0000000001030307) 
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: Starting Open-iSCSI...
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 06 09:42:40 np0005548790.localdomain iscsid[209336]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf
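The missing-file complaints above go away once /etc/iscsi/initiatorname.iscsi exists with the format the warning describes. A minimal sketch of writing such a file, assuming an invented IQN built from this host's domain (a real deployment would generate one with iscsi-iname and write to /etc/iscsi):

```python
# Hypothetical InitiatorName file matching the iqn.yyyy-mm.<reversed domain>[:identifier]
# format from the iscsid warning; the IQN value itself is made up for illustration.
from pathlib import Path

path = Path("/tmp/initiatorname.iscsi")   # /etc/iscsi/initiatorname.iscsi on the real host
path.write_text("InitiatorName=iqn.2025-12.localdomain.np0005548790:node1\n")
print(path.read_text().strip())
```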
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: Started Open-iSCSI.
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec 06 09:42:40 np0005548790.localdomain systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Dec 06 09:42:40 np0005548790.localdomain sudo[209293]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:41 np0005548790.localdomain sudo[209445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dciyrogpccfmefmumosjpqbgflthholh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014161.057426-379-212480234663318/AnsiballZ_service_facts.py
Dec 06 09:42:41 np0005548790.localdomain sudo[209445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:41 np0005548790.localdomain python3.9[209447]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:42:41 np0005548790.localdomain network[209464]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:42:41 np0005548790.localdomain network[209465]: 'network-scripts' will be removed from the distribution in the near future.
Dec 06 09:42:41 np0005548790.localdomain network[209466]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:42:42 np0005548790.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 06 09:42:42 np0005548790.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 06 09:42:42 np0005548790.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Dec 06 09:42:42 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:42:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1716 DF PROTO=TCP SPT=44910 DPT=9100 SEQ=3587826138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBDA5F0000000001030307) 
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l a51b5575-e1b5-4243-baa8-09cdd95fb31f
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default, then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l a51b5575-e1b5-4243-baa8-09cdd95fb31f
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l a51b5575-e1b5-4243-baa8-09cdd95fb31f
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l a51b5575-e1b5-4243-baa8-09cdd95fb31f
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l a51b5575-e1b5-4243-baa8-09cdd95fb31f
Dec 06 09:42:43 np0005548790.localdomain setroubleshoot[209481]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
                                                                 
Dec 06 09:42:45 np0005548790.localdomain sudo[209445]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:46 np0005548790.localdomain sudo[209713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nlatytjvpvcfhnxpflocnpdlursgrbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014166.1570542-409-1064404799962/AnsiballZ_file.py
Dec 06 09:42:46 np0005548790.localdomain sudo[209713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:46 np0005548790.localdomain python3.9[209715]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:42:46 np0005548790.localdomain sudo[209713]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:47 np0005548790.localdomain sudo[209823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igjzjkjlphbbgysmbtmpnfsspfvtnude ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014167.1190953-434-164678676431308/AnsiballZ_modprobe.py
Dec 06 09:42:47 np0005548790.localdomain sudo[209823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:47 np0005548790.localdomain python3.9[209825]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 06 09:42:47 np0005548790.localdomain sudo[209823]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:48 np0005548790.localdomain sudo[209937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfjrxdrbgftevovhttfftnahwvcunccp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014168.0330696-457-24401780232677/AnsiballZ_stat.py
Dec 06 09:42:48 np0005548790.localdomain sudo[209937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:42:48.349 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:42:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:42:48.350 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:42:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:42:48.350 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:42:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51833 DF PROTO=TCP SPT=56970 DPT=9102 SEQ=1298491132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBEE310000000001030307) 
Dec 06 09:42:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29481 DF PROTO=TCP SPT=55260 DPT=9882 SEQ=359150804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBEE380000000001030307) 
Dec 06 09:42:48 np0005548790.localdomain python3.9[209939]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:42:48 np0005548790.localdomain sudo[209937]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:48 np0005548790.localdomain sudo[210025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odwacikwplwusbvlislywdjmdlwiqusi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014168.0330696-457-24401780232677/AnsiballZ_copy.py
Dec 06 09:42:48 np0005548790.localdomain sudo[210025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:49 np0005548790.localdomain python3.9[210027]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014168.0330696-457-24401780232677/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:49 np0005548790.localdomain sudo[210025]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:49 np0005548790.localdomain sudo[210135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otofutojxpwlcbqajxbkvzddriznwftq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014169.3906384-506-234488251179593/AnsiballZ_lineinfile.py
Dec 06 09:42:49 np0005548790.localdomain sudo[210135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:49 np0005548790.localdomain python3.9[210137]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:49 np0005548790.localdomain sudo[210135]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:50 np0005548790.localdomain sudo[210245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bodaptpdvrvpsclyzrczyusdhejkxjys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014170.089104-529-246515714064194/AnsiballZ_systemd.py
Dec 06 09:42:50 np0005548790.localdomain sudo[210245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:50 np0005548790.localdomain python3.9[210247]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:42:50 np0005548790.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 09:42:50 np0005548790.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 06 09:42:50 np0005548790.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 06 09:42:51 np0005548790.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 06 09:42:51 np0005548790.localdomain systemd-modules-load[210251]: Module 'msr' is built in
Dec 06 09:42:51 np0005548790.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 06 09:42:51 np0005548790.localdomain sudo[210245]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51835 DF PROTO=TCP SPT=56970 DPT=9102 SEQ=1298491132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FBFA1F0000000001030307) 
Dec 06 09:42:51 np0005548790.localdomain sudo[210360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbaxymcvbbwfrxgqgwdgdveohlseluzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014171.328934-554-154874421452395/AnsiballZ_file.py
Dec 06 09:42:51 np0005548790.localdomain sudo[210360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:51 np0005548790.localdomain python3.9[210362]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:42:51 np0005548790.localdomain sudo[210360]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:53 np0005548790.localdomain sudo[210470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyxopitrqmmpywowmzxryyvfybapsmqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014172.7965415-580-257537075028059/AnsiballZ_stat.py
Dec 06 09:42:53 np0005548790.localdomain sudo[210470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:42:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:42:53 np0005548790.localdomain podman[210474]: 2025-12-06 09:42:53.228322032 +0000 UTC m=+0.080916714 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:42:53 np0005548790.localdomain podman[210474]: 2025-12-06 09:42:53.270121009 +0000 UTC m=+0.122715641 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:42:53 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:42:53 np0005548790.localdomain podman[210473]: 2025-12-06 09:42:53.290511585 +0000 UTC m=+0.144082223 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:42:53 np0005548790.localdomain podman[210473]: 2025-12-06 09:42:53.323097795 +0000 UTC m=+0.176668433 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 09:42:53 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:42:53 np0005548790.localdomain python3.9[210472]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:42:53 np0005548790.localdomain sudo[210470]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:53 np0005548790.localdomain sudo[210623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mrnizaohgnizeutxkiexjcsxmbcniwsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014173.6115696-608-108970539047746/AnsiballZ_stat.py
Dec 06 09:42:53 np0005548790.localdomain sudo[210623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:53 np0005548790.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Dec 06 09:42:53 np0005548790.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 06 09:42:54 np0005548790.localdomain python3.9[210625]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:42:54 np0005548790.localdomain sudo[210623]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:54 np0005548790.localdomain sshd[210626]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:42:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60500 DF PROTO=TCP SPT=47634 DPT=9105 SEQ=3610846977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC06A00000000001030307) 
Dec 06 09:42:55 np0005548790.localdomain sudo[210735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtiidllxgjirdaitjdwajjvqifwfqxir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014174.896949-632-77442593060425/AnsiballZ_stat.py
Dec 06 09:42:55 np0005548790.localdomain sudo[210735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:55 np0005548790.localdomain python3.9[210737]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:42:55 np0005548790.localdomain sudo[210735]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:55 np0005548790.localdomain sudo[210823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qfvfholqjjsalqkwlklgcxxewrxzgulf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014174.896949-632-77442593060425/AnsiballZ_copy.py
Dec 06 09:42:55 np0005548790.localdomain sudo[210823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:55 np0005548790.localdomain python3.9[210825]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014174.896949-632-77442593060425/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:55 np0005548790.localdomain sudo[210823]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:56 np0005548790.localdomain sudo[210933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfzyxidizwdpprzecjywitqsliyibrzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014176.2335942-676-59587465690878/AnsiballZ_command.py
Dec 06 09:42:56 np0005548790.localdomain sudo[210933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:56 np0005548790.localdomain python3.9[210935]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:42:56 np0005548790.localdomain sudo[210933]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:57 np0005548790.localdomain sshd[210626]: error: maximum authentication attempts exceeded for root from 14.48.24.90 port 43930 ssh2 [preauth]
Dec 06 09:42:57 np0005548790.localdomain sshd[210626]: Disconnecting authenticating user root 14.48.24.90 port 43930: Too many authentication failures [preauth]
Dec 06 09:42:57 np0005548790.localdomain sudo[211044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bezpaztzgkaidvhwchuctugprnqjvvdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014176.903632-701-236989665505492/AnsiballZ_lineinfile.py
Dec 06 09:42:57 np0005548790.localdomain sudo[211044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:57 np0005548790.localdomain python3.9[211046]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:57 np0005548790.localdomain sudo[211044]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:57 np0005548790.localdomain sshd[211064]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:42:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17493 DF PROTO=TCP SPT=56312 DPT=9105 SEQ=4006795017 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC131F0000000001030307) 
Dec 06 09:42:58 np0005548790.localdomain sudo[211156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkidhviggvxnkpbljnhkddpmtyoighay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014177.7645843-725-254815865416164/AnsiballZ_replace.py
Dec 06 09:42:58 np0005548790.localdomain sudo[211156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:58 np0005548790.localdomain python3.9[211158]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:58 np0005548790.localdomain sudo[211156]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:58 np0005548790.localdomain sudo[211176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:58 np0005548790.localdomain sudo[211176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:42:58 np0005548790.localdomain sudo[211176]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:58 np0005548790.localdomain sudo[211194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 09:42:58 np0005548790.localdomain sudo[211194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:42:58 np0005548790.localdomain sudo[211315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njgfktxhsqczfyomuecufzgcrqevhnqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014178.621264-749-136834125266545/AnsiballZ_replace.py
Dec 06 09:42:58 np0005548790.localdomain sudo[211315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:58 np0005548790.localdomain sudo[211194]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:59 np0005548790.localdomain python3.9[211323]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:59 np0005548790.localdomain sudo[211315]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:59 np0005548790.localdomain sudo[211343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:42:59 np0005548790.localdomain sudo[211343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:42:59 np0005548790.localdomain sudo[211343]: pam_unix(sudo:session): session closed for user root
Dec 06 09:42:59 np0005548790.localdomain sudo[211367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:42:59 np0005548790.localdomain sudo[211367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:42:59 np0005548790.localdomain sudo[211469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdfzefiinxiyuapcjgeqlrrjildcwrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014179.3793156-776-49750087839693/AnsiballZ_lineinfile.py
Dec 06 09:42:59 np0005548790.localdomain sudo[211469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:42:59 np0005548790.localdomain python3.9[211471]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:42:59 np0005548790.localdomain sudo[211469]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:00 np0005548790.localdomain sudo[211367]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:00 np0005548790.localdomain sshd[211064]: error: maximum authentication attempts exceeded for root from 14.48.24.90 port 44586 ssh2 [preauth]
Dec 06 09:43:00 np0005548790.localdomain sshd[211064]: Disconnecting authenticating user root 14.48.24.90 port 44586: Too many authentication failures [preauth]
Dec 06 09:43:00 np0005548790.localdomain sudo[211611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jaioegwfusjrbknepkqghgxtapjoivsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014179.9503767-776-112909164763856/AnsiballZ_lineinfile.py
Dec 06 09:43:00 np0005548790.localdomain sudo[211611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:00 np0005548790.localdomain python3.9[211613]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:00 np0005548790.localdomain sudo[211611]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:00 np0005548790.localdomain sshd[211647]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:00 np0005548790.localdomain sudo[211662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:43:00 np0005548790.localdomain sudo[211662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:43:00 np0005548790.localdomain sudo[211662]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60502 DF PROTO=TCP SPT=47634 DPT=9105 SEQ=3610846977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC1E5F0000000001030307) 
Dec 06 09:43:00 np0005548790.localdomain sudo[211741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzraxnfpxqcxhrnelcxbogxhfugrdxgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014180.5728679-776-126042386040876/AnsiballZ_lineinfile.py
Dec 06 09:43:00 np0005548790.localdomain sudo[211741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:01 np0005548790.localdomain python3.9[211743]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:01 np0005548790.localdomain sudo[211741]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:01 np0005548790.localdomain sudo[211851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjspwuhmkdhsjquqoslfwwudvxvfyibn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014181.1776006-776-205737750938743/AnsiballZ_lineinfile.py
Dec 06 09:43:01 np0005548790.localdomain sudo[211851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:01 np0005548790.localdomain python3.9[211853]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:01 np0005548790.localdomain sudo[211851]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:02 np0005548790.localdomain sudo[211961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iipzzmeerjgsyycdfqbwfiuqjmblezxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014181.993088-862-197874740464321/AnsiballZ_stat.py
Dec 06 09:43:02 np0005548790.localdomain sudo[211961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:02 np0005548790.localdomain python3.9[211963]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:43:02 np0005548790.localdomain sudo[211961]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:02 np0005548790.localdomain sudo[212073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hibhjogxxyvumrqonnstybyopwwwnhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014182.705928-886-249429629949258/AnsiballZ_file.py
Dec 06 09:43:02 np0005548790.localdomain sudo[212073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:03 np0005548790.localdomain python3.9[212075]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:03 np0005548790.localdomain sudo[212073]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:03 np0005548790.localdomain sshd[211647]: error: maximum authentication attempts exceeded for root from 14.48.24.90 port 45256 ssh2 [preauth]
Dec 06 09:43:03 np0005548790.localdomain sshd[211647]: Disconnecting authenticating user root 14.48.24.90 port 45256: Too many authentication failures [preauth]
Dec 06 09:43:03 np0005548790.localdomain sshd[212147]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:03 np0005548790.localdomain sudo[212185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfbaogtwtpdmqvkhgezajlypzqtkifiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014183.5429916-913-144414114407639/AnsiballZ_file.py
Dec 06 09:43:03 np0005548790.localdomain sudo[212185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51837 DF PROTO=TCP SPT=56970 DPT=9102 SEQ=1298491132 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC2B1F0000000001030307) 
Dec 06 09:43:04 np0005548790.localdomain python3.9[212187]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:04 np0005548790.localdomain sudo[212185]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:05 np0005548790.localdomain sudo[212295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzobhktfsuqgpebfrkfkpddofduqjiak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014184.8792334-938-137307287176796/AnsiballZ_stat.py
Dec 06 09:43:05 np0005548790.localdomain sudo[212295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:05 np0005548790.localdomain python3.9[212297]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:05 np0005548790.localdomain sudo[212295]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:05 np0005548790.localdomain sshd[212147]: Received disconnect from 14.48.24.90 port 45886:11: disconnected by user [preauth]
Dec 06 09:43:05 np0005548790.localdomain sshd[212147]: Disconnected from authenticating user root 14.48.24.90 port 45886 [preauth]
Dec 06 09:43:05 np0005548790.localdomain sudo[212352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afpwlnnynekdajxnsfpmxxiiaejrrysk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014184.8792334-938-137307287176796/AnsiballZ_file.py
Dec 06 09:43:05 np0005548790.localdomain sudo[212352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:05 np0005548790.localdomain sshd[212355]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:05 np0005548790.localdomain python3.9[212354]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:05 np0005548790.localdomain sudo[212352]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5607 DF PROTO=TCP SPT=55308 DPT=9101 SEQ=1969437328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC35200000000001030307) 
Dec 06 09:43:06 np0005548790.localdomain sshd[212355]: Invalid user admin from 14.48.24.90 port 46224
Dec 06 09:43:07 np0005548790.localdomain sudo[212464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqmkcwilrmldbfanbqzltboeolxtjyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014185.9543374-938-145614689757027/AnsiballZ_stat.py
Dec 06 09:43:07 np0005548790.localdomain sudo[212464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:07 np0005548790.localdomain python3.9[212466]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:07 np0005548790.localdomain sudo[212464]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:07 np0005548790.localdomain sudo[212521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uesgfphbpyiqksdekxwmvdynkbekpkjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014185.9543374-938-145614689757027/AnsiballZ_file.py
Dec 06 09:43:07 np0005548790.localdomain sudo[212521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:07 np0005548790.localdomain python3.9[212523]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:07 np0005548790.localdomain sudo[212521]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:08 np0005548790.localdomain sshd[212355]: error: maximum authentication attempts exceeded for invalid user admin from 14.48.24.90 port 46224 ssh2 [preauth]
Dec 06 09:43:08 np0005548790.localdomain sshd[212355]: Disconnecting invalid user admin 14.48.24.90 port 46224: Too many authentication failures [preauth]
Dec 06 09:43:08 np0005548790.localdomain sudo[212631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-achbmyxamuumejnewhjsjzskekjnxhgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014187.881702-1006-17210335100/AnsiballZ_file.py
Dec 06 09:43:08 np0005548790.localdomain sudo[212631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:08 np0005548790.localdomain python3.9[212633]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:08 np0005548790.localdomain sudo[212631]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:08 np0005548790.localdomain sshd[212651]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:08 np0005548790.localdomain sudo[212743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzmrfmtfmdohvqvsbcelhhebatmgrsjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014188.6151805-1030-197590975016157/AnsiballZ_stat.py
Dec 06 09:43:08 np0005548790.localdomain sudo[212743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:09 np0005548790.localdomain python3.9[212745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:09 np0005548790.localdomain sudo[212743]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:09 np0005548790.localdomain sudo[212800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vajfktudwvvbaykolpktacbifubpiojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014188.6151805-1030-197590975016157/AnsiballZ_file.py
Dec 06 09:43:09 np0005548790.localdomain sudo[212800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:09 np0005548790.localdomain python3.9[212802]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:09 np0005548790.localdomain sudo[212800]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:09 np0005548790.localdomain sshd[212651]: Invalid user admin from 14.48.24.90 port 46834
Dec 06 09:43:10 np0005548790.localdomain sudo[212910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtflmdtyligycxisvnchbnaziyyalrkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014189.7342527-1067-49391240198721/AnsiballZ_stat.py
Dec 06 09:43:10 np0005548790.localdomain sudo[212910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55792 DF PROTO=TCP SPT=43022 DPT=9100 SEQ=1151479475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC431F0000000001030307) 
Dec 06 09:43:10 np0005548790.localdomain python3.9[212912]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:10 np0005548790.localdomain sudo[212910]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:10 np0005548790.localdomain sudo[212967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aavrtfvojijosoyeqpvnrqlgqnihvaiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014189.7342527-1067-49391240198721/AnsiballZ_file.py
Dec 06 09:43:10 np0005548790.localdomain sudo[212967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:10 np0005548790.localdomain python3.9[212969]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:10 np0005548790.localdomain sudo[212967]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:10 np0005548790.localdomain sshd[212651]: error: maximum authentication attempts exceeded for invalid user admin from 14.48.24.90 port 46834 ssh2 [preauth]
Dec 06 09:43:10 np0005548790.localdomain sshd[212651]: Disconnecting invalid user admin 14.48.24.90 port 46834: Too many authentication failures [preauth]
Dec 06 09:43:11 np0005548790.localdomain sudo[213077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsyfivvoxczzlpsaaojblkzznitzcrwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014190.9504087-1103-87554484853345/AnsiballZ_systemd.py
Dec 06 09:43:11 np0005548790.localdomain sudo[213077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:11 np0005548790.localdomain sshd[213080]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:11 np0005548790.localdomain python3.9[213079]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:11 np0005548790.localdomain systemd-rc-local-generator[213106]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:11 np0005548790.localdomain systemd-sysv-generator[213111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:11 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:12 np0005548790.localdomain sudo[213077]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:12 np0005548790.localdomain sudo[213227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzfweckdarcedfkeyawnaknuucyswhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014192.2465646-1126-213439215770958/AnsiballZ_stat.py
Dec 06 09:43:12 np0005548790.localdomain sudo[213227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:12 np0005548790.localdomain python3.9[213229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:12 np0005548790.localdomain sshd[213080]: Invalid user admin from 14.48.24.90 port 47444
Dec 06 09:43:12 np0005548790.localdomain sudo[213227]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:13 np0005548790.localdomain sudo[213284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnguehsyfdhbvqcbsxatermttgvclssf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014192.2465646-1126-213439215770958/AnsiballZ_file.py
Dec 06 09:43:13 np0005548790.localdomain sudo[213284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:13 np0005548790.localdomain python3.9[213286]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:13 np0005548790.localdomain sudo[213284]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60532 DF PROTO=TCP SPT=55490 DPT=9100 SEQ=3385390523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC4F5F0000000001030307) 
Dec 06 09:43:13 np0005548790.localdomain sshd[213080]: Received disconnect from 14.48.24.90 port 47444:11: disconnected by user [preauth]
Dec 06 09:43:13 np0005548790.localdomain sshd[213080]: Disconnected from invalid user admin 14.48.24.90 port 47444 [preauth]
Dec 06 09:43:13 np0005548790.localdomain sudo[213394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-keezbcccxdmvrjauaxcdcedgzdjemumc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014193.4287899-1163-266017965372422/AnsiballZ_stat.py
Dec 06 09:43:13 np0005548790.localdomain sudo[213394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:13 np0005548790.localdomain sshd[213397]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:13 np0005548790.localdomain python3.9[213396]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:13 np0005548790.localdomain sudo[213394]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:14 np0005548790.localdomain sudo[213453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqoqzyvddtaoujwtwdcysiwvorrjhlfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014193.4287899-1163-266017965372422/AnsiballZ_file.py
Dec 06 09:43:14 np0005548790.localdomain sudo[213453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:14 np0005548790.localdomain python3.9[213455]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:14 np0005548790.localdomain sudo[213453]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:15 np0005548790.localdomain sudo[213563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zgjafdysirzioaknwqbeecerfybfzrwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014194.630164-1199-36796617842967/AnsiballZ_systemd.py
Dec 06 09:43:15 np0005548790.localdomain sudo[213563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:15 np0005548790.localdomain python3.9[213565]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:15 np0005548790.localdomain systemd-sysv-generator[213593]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:15 np0005548790.localdomain systemd-rc-local-generator[213590]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:15 np0005548790.localdomain sshd[213397]: Invalid user oracle from 14.48.24.90 port 47954
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:43:15 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:43:15 np0005548790.localdomain sudo[213563]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:16 np0005548790.localdomain sudo[213715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymrjfzodzogcccmaqjfzvssdzvjvvqiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014196.1962404-1229-98840654116155/AnsiballZ_file.py
Dec 06 09:43:16 np0005548790.localdomain sudo[213715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:16 np0005548790.localdomain python3.9[213717]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:16 np0005548790.localdomain sudo[213715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:16 np0005548790.localdomain sshd[213397]: error: maximum authentication attempts exceeded for invalid user oracle from 14.48.24.90 port 47954 ssh2 [preauth]
Dec 06 09:43:16 np0005548790.localdomain sshd[213397]: Disconnecting invalid user oracle 14.48.24.90 port 47954: Too many authentication failures [preauth]
Dec 06 09:43:17 np0005548790.localdomain sudo[213825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okxituvcgqancrtlxouawlvfjbdnfpuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014196.9167051-1252-80739045889199/AnsiballZ_stat.py
Dec 06 09:43:17 np0005548790.localdomain sudo[213825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:17 np0005548790.localdomain sshd[213828]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:17 np0005548790.localdomain python3.9[213827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:17 np0005548790.localdomain sudo[213825]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:17 np0005548790.localdomain sudo[213915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvxfekgzvmbwehoobqbniwbhrclavwvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014196.9167051-1252-80739045889199/AnsiballZ_copy.py
Dec 06 09:43:17 np0005548790.localdomain sudo[213915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:17 np0005548790.localdomain python3.9[213917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014196.9167051-1252-80739045889199/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:18 np0005548790.localdomain sudo[213915]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46833 DF PROTO=TCP SPT=35866 DPT=9102 SEQ=1007950733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC63610000000001030307) 
Dec 06 09:43:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12909 DF PROTO=TCP SPT=48652 DPT=9882 SEQ=2337967496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC63670000000001030307) 
Dec 06 09:43:18 np0005548790.localdomain sshd[213828]: Invalid user oracle from 14.48.24.90 port 48620
Dec 06 09:43:18 np0005548790.localdomain sudo[214025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjiayyuiwbmkpwmgwwetuucmhspznnfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014198.53598-1304-276743596628089/AnsiballZ_file.py
Dec 06 09:43:18 np0005548790.localdomain sudo[214025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:19 np0005548790.localdomain python3.9[214027]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:43:19 np0005548790.localdomain sudo[214025]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:19 np0005548790.localdomain sudo[214135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzyrjygalzhglneiaiqhtpwfzoofydcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014199.2652066-1327-169095444311560/AnsiballZ_stat.py
Dec 06 09:43:19 np0005548790.localdomain sudo[214135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:19 np0005548790.localdomain sshd[213828]: error: maximum authentication attempts exceeded for invalid user oracle from 14.48.24.90 port 48620 ssh2 [preauth]
Dec 06 09:43:19 np0005548790.localdomain sshd[213828]: Disconnecting invalid user oracle 14.48.24.90 port 48620: Too many authentication failures [preauth]
Dec 06 09:43:19 np0005548790.localdomain python3.9[214137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:19 np0005548790.localdomain sudo[214135]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:20 np0005548790.localdomain sudo[214223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryqcivtllbdpeschuqvnowjgyakeyuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014199.2652066-1327-169095444311560/AnsiballZ_copy.py
Dec 06 09:43:20 np0005548790.localdomain sudo[214223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:20 np0005548790.localdomain sshd[214226]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:20 np0005548790.localdomain python3.9[214225]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014199.2652066-1327-169095444311560/.source.json _original_basename=.3h1eepb0 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:20 np0005548790.localdomain sudo[214223]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:20 np0005548790.localdomain sudo[214335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ittveifrqvfxtsygglmsphldwgheemls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014200.5525792-1372-239063260786812/AnsiballZ_file.py
Dec 06 09:43:20 np0005548790.localdomain sudo[214335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:21 np0005548790.localdomain python3.9[214337]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:21 np0005548790.localdomain sudo[214335]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60533 DF PROTO=TCP SPT=55490 DPT=9100 SEQ=3385390523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC6F1F0000000001030307) 
Dec 06 09:43:21 np0005548790.localdomain sudo[214445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdzvozxfwpihentclqzpgzdfrgyntrjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014201.2510195-1396-201226034852721/AnsiballZ_stat.py
Dec 06 09:43:21 np0005548790.localdomain sudo[214445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:21 np0005548790.localdomain sudo[214445]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:21 np0005548790.localdomain sshd[214226]: Invalid user oracle from 14.48.24.90 port 49230
Dec 06 09:43:22 np0005548790.localdomain sudo[214533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfidzowjzqzngvqvlmjwrjcwlvxoxryt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014201.2510195-1396-201226034852721/AnsiballZ_copy.py
Dec 06 09:43:22 np0005548790.localdomain sudo[214533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:22 np0005548790.localdomain sshd[214226]: Received disconnect from 14.48.24.90 port 49230:11: disconnected by user [preauth]
Dec 06 09:43:22 np0005548790.localdomain sshd[214226]: Disconnected from invalid user oracle 14.48.24.90 port 49230 [preauth]
Dec 06 09:43:22 np0005548790.localdomain sudo[214533]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:22 np0005548790.localdomain sshd[214553]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:23 np0005548790.localdomain sudo[214645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnbxrhkjmmdlphagjjezxanlwgmvoouv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014202.7698567-1448-265356789028229/AnsiballZ_container_config_data.py
Dec 06 09:43:23 np0005548790.localdomain sudo[214645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:43:23 np0005548790.localdomain python3.9[214647]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 06 09:43:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:43:23 np0005548790.localdomain systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 06 09:43:23 np0005548790.localdomain sudo[214645]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:23 np0005548790.localdomain podman[214650]: 2025-12-06 09:43:23.527043636 +0000 UTC m=+0.082136346 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 06 09:43:23 np0005548790.localdomain systemd[1]: tmp-crun.puYGWT.mount: Deactivated successfully.
Dec 06 09:43:23 np0005548790.localdomain podman[214649]: 2025-12-06 09:43:23.600464188 +0000 UTC m=+0.156006030 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:43:23 np0005548790.localdomain podman[214649]: 2025-12-06 09:43:23.604975479 +0000 UTC m=+0.160517251 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 09:43:23 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:43:23 np0005548790.localdomain podman[214650]: 2025-12-06 09:43:23.661351126 +0000 UTC m=+0.216443865 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:43:23 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:43:23 np0005548790.localdomain sshd[214553]: Invalid user usuario from 14.48.24.90 port 49734
Dec 06 09:43:24 np0005548790.localdomain sudo[214800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwfpvhhqpfvggxkdndvlmifxobwcshpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014203.7671158-1474-203621451448265/AnsiballZ_container_config_hash.py
Dec 06 09:43:24 np0005548790.localdomain sudo[214800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:24 np0005548790.localdomain python3.9[214802]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:43:24 np0005548790.localdomain sudo[214800]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:24 np0005548790.localdomain systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 06 09:43:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64801 DF PROTO=TCP SPT=55962 DPT=9105 SEQ=27927424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC7BDF0000000001030307) 
Dec 06 09:43:24 np0005548790.localdomain sshd[214553]: error: maximum authentication attempts exceeded for invalid user usuario from 14.48.24.90 port 49734 ssh2 [preauth]
Dec 06 09:43:24 np0005548790.localdomain sshd[214553]: Disconnecting invalid user usuario 14.48.24.90 port 49734: Too many authentication failures [preauth]
Dec 06 09:43:25 np0005548790.localdomain sshd[214913]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:25 np0005548790.localdomain sudo[214911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brbenzcdkbjoatwmieboadtnkmgqhpvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014204.861258-1501-91211422451710/AnsiballZ_podman_container_info.py
Dec 06 09:43:25 np0005548790.localdomain sudo[214911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:25 np0005548790.localdomain python3.9[214915]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:43:25 np0005548790.localdomain sudo[214911]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:26 np0005548790.localdomain sshd[214913]: Invalid user usuario from 14.48.24.90 port 50346
Dec 06 09:43:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4519 DF PROTO=TCP SPT=52840 DPT=9105 SEQ=211483315 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC87200000000001030307) 
Dec 06 09:43:27 np0005548790.localdomain sshd[214913]: error: maximum authentication attempts exceeded for invalid user usuario from 14.48.24.90 port 50346 ssh2 [preauth]
Dec 06 09:43:27 np0005548790.localdomain sshd[214913]: Disconnecting invalid user usuario 14.48.24.90 port 50346: Too many authentication failures [preauth]
Dec 06 09:43:28 np0005548790.localdomain sshd[214960]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:29 np0005548790.localdomain sudo[215052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgojwhxpyxnzbjgodkfgputagyiqsjth ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014209.1177368-1540-172836004941042/AnsiballZ_edpm_container_manage.py
Dec 06 09:43:29 np0005548790.localdomain sudo[215052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:29 np0005548790.localdomain python3[215054]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:43:29 np0005548790.localdomain sshd[214960]: Invalid user usuario from 14.48.24.90 port 50970
Dec 06 09:43:30 np0005548790.localdomain sshd[214960]: Received disconnect from 14.48.24.90 port 50970:11: disconnected by user [preauth]
Dec 06 09:43:30 np0005548790.localdomain sshd[214960]: Disconnected from invalid user usuario 14.48.24.90 port 50970 [preauth]
Dec 06 09:43:30 np0005548790.localdomain sshd[215081]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64803 DF PROTO=TCP SPT=55962 DPT=9105 SEQ=27927424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC939F0000000001030307) 
Dec 06 09:43:31 np0005548790.localdomain podman[215067]: 2025-12-06 09:43:29.983374163 +0000 UTC m=+0.047192503 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 09:43:31 np0005548790.localdomain podman[215116]: 
Dec 06 09:43:31 np0005548790.localdomain podman[215116]: 2025-12-06 09:43:31.718311029 +0000 UTC m=+0.082636170 container create 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:43:31 np0005548790.localdomain podman[215116]: 2025-12-06 09:43:31.67832143 +0000 UTC m=+0.042646591 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 09:43:31 np0005548790.localdomain python3[215054]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 09:43:31 np0005548790.localdomain sudo[215052]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:32 np0005548790.localdomain sshd[215081]: Invalid user test from 14.48.24.90 port 51444
Dec 06 09:43:33 np0005548790.localdomain sshd[215081]: error: maximum authentication attempts exceeded for invalid user test from 14.48.24.90 port 51444 ssh2 [preauth]
Dec 06 09:43:33 np0005548790.localdomain sshd[215081]: Disconnecting invalid user test 14.48.24.90 port 51444: Too many authentication failures [preauth]
Dec 06 09:43:33 np0005548790.localdomain sudo[215260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iifxiowdvhlarmxlmjtqysipqqvmvqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014212.9182646-1565-61913975119378/AnsiballZ_stat.py
Dec 06 09:43:33 np0005548790.localdomain sudo[215260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:33 np0005548790.localdomain python3.9[215262]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:43:33 np0005548790.localdomain sudo[215260]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46837 DF PROTO=TCP SPT=35866 DPT=9102 SEQ=1007950733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FC9F1F0000000001030307) 
Dec 06 09:43:33 np0005548790.localdomain sshd[215282]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:34 np0005548790.localdomain sudo[215374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ehulatotepavqrrtltowokyuzalzkbph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014213.9482782-1592-262751882162763/AnsiballZ_file.py
Dec 06 09:43:34 np0005548790.localdomain sudo[215374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:34 np0005548790.localdomain python3.9[215376]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:34 np0005548790.localdomain sudo[215374]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:35 np0005548790.localdomain sudo[215429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emhkmhnwjsglfajphljzgrfmpaosgcbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014213.9482782-1592-262751882162763/AnsiballZ_stat.py
Dec 06 09:43:35 np0005548790.localdomain sudo[215429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:35 np0005548790.localdomain sshd[215282]: Invalid user test from 14.48.24.90 port 52068
Dec 06 09:43:35 np0005548790.localdomain python3.9[215431]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:43:35 np0005548790.localdomain sudo[215429]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:35 np0005548790.localdomain systemd[1]: virtqemud.service: Deactivated successfully.
Dec 06 09:43:35 np0005548790.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 06 09:43:36 np0005548790.localdomain sudo[215541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyavgqqbrckvddusaoajwttrzfbisfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014215.5927272-1592-43564755375411/AnsiballZ_copy.py
Dec 06 09:43:36 np0005548790.localdomain sudo[215541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:36 np0005548790.localdomain python3.9[215543]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014215.5927272-1592-43564755375411/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:36 np0005548790.localdomain sudo[215541]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:36 np0005548790.localdomain sshd[215282]: error: maximum authentication attempts exceeded for invalid user test from 14.48.24.90 port 52068 ssh2 [preauth]
Dec 06 09:43:36 np0005548790.localdomain sshd[215282]: Disconnecting invalid user test 14.48.24.90 port 52068: Too many authentication failures [preauth]
Dec 06 09:43:36 np0005548790.localdomain sudo[215596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtsdpzzpswovvdhkjmjkafqmmneavfog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014215.5927272-1592-43564755375411/AnsiballZ_systemd.py
Dec 06 09:43:36 np0005548790.localdomain sudo[215596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61066 DF PROTO=TCP SPT=40332 DPT=9101 SEQ=1795393211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCAB1F0000000001030307) 
Dec 06 09:43:36 np0005548790.localdomain python3.9[215598]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:43:36 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:36 np0005548790.localdomain sshd[215600]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:37 np0005548790.localdomain systemd-rc-local-generator[215623]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:37 np0005548790.localdomain systemd-sysv-generator[215630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:37 np0005548790.localdomain sudo[215596]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:37 np0005548790.localdomain sudo[215689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjbjwmibpieywwnslrocdoeqpocuykhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014215.5927272-1592-43564755375411/AnsiballZ_systemd.py
Dec 06 09:43:37 np0005548790.localdomain sudo[215689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:37 np0005548790.localdomain python3.9[215691]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:43:37 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:38 np0005548790.localdomain systemd-rc-local-generator[215721]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:38 np0005548790.localdomain systemd-sysv-generator[215724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:38 np0005548790.localdomain sshd[215600]: Invalid user test from 14.48.24.90 port 52646
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: Starting multipathd container...
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:43:38 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fbb370759c39bdb74da5224d50e5a49f3578761a7064abc47fc899ae9a7650/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:38 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fbb370759c39bdb74da5224d50e5a49f3578761a7064abc47fc899ae9a7650/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:43:38 np0005548790.localdomain podman[215733]: 2025-12-06 09:43:38.455518221 +0000 UTC m=+0.159042931 container init 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + sudo -E kolla_set_configs
Dec 06 09:43:38 np0005548790.localdomain sudo[215755]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 09:43:38 np0005548790.localdomain sudo[215755]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:43:38 np0005548790.localdomain sudo[215755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:43:38 np0005548790.localdomain podman[215733]: 2025-12-06 09:43:38.504094339 +0000 UTC m=+0.207619049 container start 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd)
Dec 06 09:43:38 np0005548790.localdomain podman[215733]: multipathd
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: Started multipathd container.
Dec 06 09:43:38 np0005548790.localdomain sudo[215689]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: INFO:__main__:Validating config file
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: INFO:__main__:Writing out command to execute
Dec 06 09:43:38 np0005548790.localdomain sudo[215755]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: ++ cat /run_command
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + CMD='/usr/sbin/multipathd -d'
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + ARGS=
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + sudo kolla_copy_cacerts
Dec 06 09:43:38 np0005548790.localdomain podman[215758]: 2025-12-06 09:43:38.586343648 +0000 UTC m=+0.081035407 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:43:38 np0005548790.localdomain sudo[215774]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 09:43:38 np0005548790.localdomain sudo[215774]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:43:38 np0005548790.localdomain sudo[215774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:43:38 np0005548790.localdomain podman[215758]: 2025-12-06 09:43:38.597197438 +0000 UTC m=+0.091889237 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:43:38 np0005548790.localdomain sudo[215774]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + [[ ! -n '' ]]
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + . kolla_extend_start
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: Running command: '/usr/sbin/multipathd -d'
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + umask 0022
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: + exec /usr/sbin/multipathd -d
Dec 06 09:43:38 np0005548790.localdomain podman[215758]: unhealthy
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: 10633.815431 | --------start up--------
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: 10633.815455 | read /etc/multipath.conf
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:43:38 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Failed with result 'exit-code'.
Dec 06 09:43:38 np0005548790.localdomain multipathd[215749]: 10633.819327 | path checkers start up
Dec 06 09:43:38 np0005548790.localdomain sshd[215600]: Received disconnect from 14.48.24.90 port 52646:11: disconnected by user [preauth]
Dec 06 09:43:38 np0005548790.localdomain sshd[215600]: Disconnected from invalid user test 14.48.24.90 port 52646 [preauth]
Dec 06 09:43:38 np0005548790.localdomain sshd[215804]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:40 np0005548790.localdomain python3.9[215896]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:43:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1719 DF PROTO=TCP SPT=44910 DPT=9100 SEQ=3587826138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCB91F0000000001030307) 
Dec 06 09:43:40 np0005548790.localdomain sshd[215804]: Invalid user user from 14.48.24.90 port 53114
Dec 06 09:43:41 np0005548790.localdomain sshd[215804]: error: maximum authentication attempts exceeded for invalid user user from 14.48.24.90 port 53114 ssh2 [preauth]
Dec 06 09:43:41 np0005548790.localdomain sshd[215804]: Disconnecting invalid user user 14.48.24.90 port 53114: Too many authentication failures [preauth]
Dec 06 09:43:41 np0005548790.localdomain sudo[216006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihxxzqwaiynlmyvmrqcwkldddrvicauo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014221.4212987-1700-40664102761327/AnsiballZ_command.py
Dec 06 09:43:41 np0005548790.localdomain sudo[216006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:41 np0005548790.localdomain python3.9[216008]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:43:41 np0005548790.localdomain sshd[216010]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:42 np0005548790.localdomain sudo[216006]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:42 np0005548790.localdomain sudo[216131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqxezwljpuafvxeideqpnrkwisjttjvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014222.226174-1723-125371192749686/AnsiballZ_systemd.py
Dec 06 09:43:42 np0005548790.localdomain sudo[216131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:42 np0005548790.localdomain python3.9[216133]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:43:42 np0005548790.localdomain systemd[1]: Stopping multipathd container...
Dec 06 09:43:42 np0005548790.localdomain multipathd[215749]: 10638.169598 | exit (signal)
Dec 06 09:43:42 np0005548790.localdomain multipathd[215749]: 10638.170185 | --------shut down-------
Dec 06 09:43:42 np0005548790.localdomain systemd[1]: libpod-97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.scope: Deactivated successfully.
Dec 06 09:43:42 np0005548790.localdomain podman[216137]: 2025-12-06 09:43:42.993375544 +0000 UTC m=+0.098368289 container died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.timer: Deactivated successfully.
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c-userdata-shm.mount: Deactivated successfully.
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-47fbb370759c39bdb74da5224d50e5a49f3578761a7064abc47fc899ae9a7650-merged.mount: Deactivated successfully.
Dec 06 09:43:43 np0005548790.localdomain podman[216137]: 2025-12-06 09:43:43.176442277 +0000 UTC m=+0.281434972 container cleanup 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:43:43 np0005548790.localdomain podman[216137]: multipathd
Dec 06 09:43:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26089 DF PROTO=TCP SPT=47412 DPT=9100 SEQ=1874915620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCC49F0000000001030307) 
Dec 06 09:43:43 np0005548790.localdomain podman[216163]: 2025-12-06 09:43:43.292345115 +0000 UTC m=+0.079064055 container cleanup 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:43:43 np0005548790.localdomain podman[216163]: multipathd
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Stopped multipathd container.
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Starting multipathd container...
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:43:43 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fbb370759c39bdb74da5224d50e5a49f3578761a7064abc47fc899ae9a7650/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:43 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fbb370759c39bdb74da5224d50e5a49f3578761a7064abc47fc899ae9a7650/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:43:43 np0005548790.localdomain podman[216176]: 2025-12-06 09:43:43.447682765 +0000 UTC m=+0.121799896 container init 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + sudo -E kolla_set_configs
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:43:43 np0005548790.localdomain sudo[216197]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 09:43:43 np0005548790.localdomain sshd[216010]: Invalid user user from 14.48.24.90 port 53732
Dec 06 09:43:43 np0005548790.localdomain sudo[216197]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:43:43 np0005548790.localdomain sudo[216197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:43:43 np0005548790.localdomain podman[216176]: 2025-12-06 09:43:43.486681388 +0000 UTC m=+0.160798509 container start 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 09:43:43 np0005548790.localdomain podman[216176]: multipathd
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: Started multipathd container.
Dec 06 09:43:43 np0005548790.localdomain sudo[216131]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: INFO:__main__:Validating config file
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: INFO:__main__:Writing out command to execute
Dec 06 09:43:43 np0005548790.localdomain sudo[216197]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: ++ cat /run_command
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + CMD='/usr/sbin/multipathd -d'
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + ARGS=
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + sudo kolla_copy_cacerts
Dec 06 09:43:43 np0005548790.localdomain sudo[216217]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 09:43:43 np0005548790.localdomain sudo[216217]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:43:43 np0005548790.localdomain sudo[216217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 06 09:43:43 np0005548790.localdomain sudo[216217]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + [[ ! -n '' ]]
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + . kolla_extend_start
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: Running command: '/usr/sbin/multipathd -d'
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + umask 0022
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: + exec /usr/sbin/multipathd -d
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: 10638.794490 | --------start up--------
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: 10638.794651 | read /etc/multipath.conf
Dec 06 09:43:43 np0005548790.localdomain multipathd[216191]: 10638.799171 | path checkers start up
Dec 06 09:43:43 np0005548790.localdomain podman[216199]: 2025-12-06 09:43:43.599474242 +0000 UTC m=+0.109714673 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:43:43 np0005548790.localdomain podman[216199]: 2025-12-06 09:43:43.607465925 +0000 UTC m=+0.117706336 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 09:43:43 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:43:44 np0005548790.localdomain sudo[216337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nstjmqpxjstcklxrhvcvgcoumlcdjfxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014223.8601918-1747-94526089206781/AnsiballZ_file.py
Dec 06 09:43:44 np0005548790.localdomain sudo[216337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:44 np0005548790.localdomain python3.9[216339]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:44 np0005548790.localdomain sudo[216337]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:44 np0005548790.localdomain sshd[216010]: error: maximum authentication attempts exceeded for invalid user user from 14.48.24.90 port 53732 ssh2 [preauth]
Dec 06 09:43:44 np0005548790.localdomain sshd[216010]: Disconnecting invalid user user 14.48.24.90 port 53732: Too many authentication failures [preauth]
Dec 06 09:43:44 np0005548790.localdomain sshd[216357]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:45 np0005548790.localdomain sudo[216449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trdiiyyflooiiktysawxvhzgfstacohx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014225.3878605-1784-211727041203644/AnsiballZ_file.py
Dec 06 09:43:45 np0005548790.localdomain sudo[216449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:45 np0005548790.localdomain python3.9[216451]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:43:45 np0005548790.localdomain sudo[216449]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:46 np0005548790.localdomain sudo[216559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhuadlelsvfuxcbajxickfxrohsctgqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014226.1536632-1808-266845474431731/AnsiballZ_modprobe.py
Dec 06 09:43:46 np0005548790.localdomain sudo[216559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:46 np0005548790.localdomain sshd[216357]: Invalid user user from 14.48.24.90 port 54316
Dec 06 09:43:46 np0005548790.localdomain python3.9[216561]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 06 09:43:46 np0005548790.localdomain sudo[216559]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:47 np0005548790.localdomain sshd[216357]: Received disconnect from 14.48.24.90 port 54316:11: disconnected by user [preauth]
Dec 06 09:43:47 np0005548790.localdomain sshd[216357]: Disconnected from invalid user user 14.48.24.90 port 54316 [preauth]
Dec 06 09:43:47 np0005548790.localdomain sshd[216641]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:47 np0005548790.localdomain sudo[216679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrqviyheqdtzwaofwfamouupeesfbjst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014227.5347114-1832-170415039827751/AnsiballZ_stat.py
Dec 06 09:43:47 np0005548790.localdomain sudo[216679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:48 np0005548790.localdomain python3.9[216681]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:43:48 np0005548790.localdomain sudo[216679]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:43:48.350 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:43:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:43:48.351 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:43:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:43:48.351 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:43:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5617 DF PROTO=TCP SPT=51080 DPT=9102 SEQ=4126584668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCD8910000000001030307) 
Dec 06 09:43:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49106 DF PROTO=TCP SPT=53622 DPT=9882 SEQ=2135504814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCD8980000000001030307) 
Dec 06 09:43:48 np0005548790.localdomain sudo[216767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hpwajakunlxvgcvbzckllwdklpaflier ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014227.5347114-1832-170415039827751/AnsiballZ_copy.py
Dec 06 09:43:48 np0005548790.localdomain sudo[216767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:48 np0005548790.localdomain python3.9[216769]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014227.5347114-1832-170415039827751/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:48 np0005548790.localdomain sudo[216767]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:49 np0005548790.localdomain sshd[216641]: Invalid user ftpuser from 14.48.24.90 port 54902
Dec 06 09:43:49 np0005548790.localdomain sudo[216877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lazdxelmngyaoknstczxssoyudrryspa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014229.171528-1879-75707109481915/AnsiballZ_lineinfile.py
Dec 06 09:43:49 np0005548790.localdomain sudo[216877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:49 np0005548790.localdomain python3.9[216879]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:43:49 np0005548790.localdomain sudo[216877]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:50 np0005548790.localdomain sshd[216641]: error: maximum authentication attempts exceeded for invalid user ftpuser from 14.48.24.90 port 54902 ssh2 [preauth]
Dec 06 09:43:50 np0005548790.localdomain sshd[216641]: Disconnecting invalid user ftpuser 14.48.24.90 port 54902: Too many authentication failures [preauth]
Dec 06 09:43:50 np0005548790.localdomain sudo[216987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-junmqzcwwtrtxyrgxgojiqapoylkvgly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014229.898292-1904-135401522966929/AnsiballZ_systemd.py
Dec 06 09:43:50 np0005548790.localdomain sudo[216987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:50 np0005548790.localdomain sshd[216990]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:50 np0005548790.localdomain python3.9[216989]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:43:50 np0005548790.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 06 09:43:50 np0005548790.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 06 09:43:50 np0005548790.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 06 09:43:50 np0005548790.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 06 09:43:50 np0005548790.localdomain systemd-modules-load[216995]: Module 'msr' is built in
Dec 06 09:43:50 np0005548790.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 06 09:43:50 np0005548790.localdomain sudo[216987]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:51 np0005548790.localdomain sudo[217103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-opzmllxutnvqsdmucvmndmlauhfxwput ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014231.0940163-1928-203121866863856/AnsiballZ_dnf.py
Dec 06 09:43:51 np0005548790.localdomain sudo[217103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:43:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49108 DF PROTO=TCP SPT=53622 DPT=9882 SEQ=2135504814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCE49F0000000001030307) 
Dec 06 09:43:51 np0005548790.localdomain python3.9[217105]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:43:51 np0005548790.localdomain sshd[216990]: Invalid user ftpuser from 14.48.24.90 port 55508
Dec 06 09:43:53 np0005548790.localdomain sshd[216990]: error: maximum authentication attempts exceeded for invalid user ftpuser from 14.48.24.90 port 55508 ssh2 [preauth]
Dec 06 09:43:53 np0005548790.localdomain sshd[216990]: Disconnecting invalid user ftpuser 14.48.24.90 port 55508: Too many authentication failures [preauth]
Dec 06 09:43:53 np0005548790.localdomain sshd[217108]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:43:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:43:54 np0005548790.localdomain systemd[1]: tmp-crun.ucnsZJ.mount: Deactivated successfully.
Dec 06 09:43:54 np0005548790.localdomain podman[217111]: 2025-12-06 09:43:54.5929 +0000 UTC m=+0.102263187 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:43:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32064 DF PROTO=TCP SPT=37104 DPT=9105 SEQ=3055687102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCF0DF0000000001030307) 
Dec 06 09:43:54 np0005548790.localdomain systemd[1]: tmp-crun.DgPOac.mount: Deactivated successfully.
Dec 06 09:43:54 np0005548790.localdomain podman[217110]: 2025-12-06 09:43:54.672828062 +0000 UTC m=+0.181886600 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 09:43:54 np0005548790.localdomain podman[217111]: 2025-12-06 09:43:54.696514634 +0000 UTC m=+0.205877811 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 09:43:54 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:43:54 np0005548790.localdomain podman[217110]: 2025-12-06 09:43:54.754758808 +0000 UTC m=+0.263817356 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 06 09:43:54 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:43:54 np0005548790.localdomain sshd[217108]: Invalid user ftpuser from 14.48.24.90 port 56048
Dec 06 09:43:55 np0005548790.localdomain sshd[217108]: Received disconnect from 14.48.24.90 port 56048:11: disconnected by user [preauth]
Dec 06 09:43:55 np0005548790.localdomain sshd[217108]: Disconnected from invalid user ftpuser 14.48.24.90 port 56048 [preauth]
Dec 06 09:43:55 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:56 np0005548790.localdomain sshd[217159]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:56 np0005548790.localdomain systemd-rc-local-generator[217184]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:56 np0005548790.localdomain systemd-sysv-generator[217187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:56 np0005548790.localdomain systemd-rc-local-generator[217218]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:56 np0005548790.localdomain systemd-sysv-generator[217225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:56 np0005548790.localdomain systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 06 09:43:56 np0005548790.localdomain systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 06 09:43:56 np0005548790.localdomain lvm[217268]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 09:43:56 np0005548790.localdomain lvm[217268]: VG ceph_vg0 finished
Dec 06 09:43:56 np0005548790.localdomain lvm[217269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 06 09:43:56 np0005548790.localdomain lvm[217269]: VG ceph_vg1 finished
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 06 09:43:56 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:43:56 np0005548790.localdomain systemd-rc-local-generator[217322]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:43:57 np0005548790.localdomain systemd-sysv-generator[217325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:43:57 np0005548790.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 06 09:43:57 np0005548790.localdomain sshd[217159]: Invalid user test1 from 14.48.24.90 port 56540
Dec 06 09:43:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60505 DF PROTO=TCP SPT=47634 DPT=9105 SEQ=3610846977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FCFD1F0000000001030307) 
Dec 06 09:43:58 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 06 09:43:58 np0005548790.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 06 09:43:58 np0005548790.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.306s CPU time.
Dec 06 09:43:58 np0005548790.localdomain systemd[1]: run-rde954ab6a20146918a027814a4219b24.service: Deactivated successfully.
Dec 06 09:43:58 np0005548790.localdomain sudo[217103]: pam_unix(sudo:session): session closed for user root
Dec 06 09:43:58 np0005548790.localdomain sshd[217159]: error: maximum authentication attempts exceeded for invalid user test1 from 14.48.24.90 port 56540 ssh2 [preauth]
Dec 06 09:43:58 np0005548790.localdomain sshd[217159]: Disconnecting invalid user test1 14.48.24.90 port 56540: Too many authentication failures [preauth]
Dec 06 09:43:58 np0005548790.localdomain sshd[218491]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:43:59 np0005548790.localdomain python3.9[218567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:44:00 np0005548790.localdomain sshd[218491]: Invalid user test1 from 14.48.24.90 port 57118
Dec 06 09:44:00 np0005548790.localdomain sudo[218679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcnptltnraipynfbryihiogaaxrgykgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014240.41319-1980-280232332802856/AnsiballZ_file.py
Dec 06 09:44:00 np0005548790.localdomain sudo[218679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32066 DF PROTO=TCP SPT=37104 DPT=9105 SEQ=3055687102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD08A00000000001030307) 
Dec 06 09:44:00 np0005548790.localdomain sudo[218681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:44:00 np0005548790.localdomain sudo[218681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:44:00 np0005548790.localdomain sudo[218681]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:00 np0005548790.localdomain sudo[218700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:44:00 np0005548790.localdomain sudo[218700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:44:00 np0005548790.localdomain python3.9[218687]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:00 np0005548790.localdomain sudo[218679]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:01 np0005548790.localdomain sshd[218491]: error: maximum authentication attempts exceeded for invalid user test1 from 14.48.24.90 port 57118 ssh2 [preauth]
Dec 06 09:44:01 np0005548790.localdomain sshd[218491]: Disconnecting invalid user test1 14.48.24.90 port 57118: Too many authentication failures [preauth]
Dec 06 09:44:01 np0005548790.localdomain sudo[218700]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:01 np0005548790.localdomain sshd[218820]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:01 np0005548790.localdomain sudo[218858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjipqsnovnqpyrmjcilpahjknrbcmpni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014241.7172828-2013-226896868717036/AnsiballZ_systemd_service.py
Dec 06 09:44:01 np0005548790.localdomain sudo[218858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:02 np0005548790.localdomain sudo[218861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:44:02 np0005548790.localdomain sudo[218861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:44:02 np0005548790.localdomain sudo[218861]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:02 np0005548790.localdomain python3.9[218860]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:44:02 np0005548790.localdomain systemd-sysv-generator[218908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:44:02 np0005548790.localdomain systemd-rc-local-generator[218903]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:02 np0005548790.localdomain sudo[218858]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:03 np0005548790.localdomain sshd[218820]: Invalid user test1 from 14.48.24.90 port 57722
Dec 06 09:44:03 np0005548790.localdomain python3.9[219022]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:44:03 np0005548790.localdomain network[219039]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:44:03 np0005548790.localdomain network[219040]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:44:03 np0005548790.localdomain network[219041]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:44:03 np0005548790.localdomain sshd[218820]: Received disconnect from 14.48.24.90 port 57722:11: disconnected by user [preauth]
Dec 06 09:44:03 np0005548790.localdomain sshd[218820]: Disconnected from invalid user test1 14.48.24.90 port 57722 [preauth]
Dec 06 09:44:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5621 DF PROTO=TCP SPT=51080 DPT=9102 SEQ=4126584668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD151F0000000001030307) 
Dec 06 09:44:03 np0005548790.localdomain sshd[219047]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:44:05 np0005548790.localdomain sshd[219047]: Invalid user test2 from 14.48.24.90 port 58128
Dec 06 09:44:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36523 DF PROTO=TCP SPT=37778 DPT=9101 SEQ=2145730028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD1F200000000001030307) 
Dec 06 09:44:06 np0005548790.localdomain sshd[219047]: error: maximum authentication attempts exceeded for invalid user test2 from 14.48.24.90 port 58128 ssh2 [preauth]
Dec 06 09:44:06 np0005548790.localdomain sshd[219047]: Disconnecting invalid user test2 14.48.24.90 port 58128: Too many authentication failures [preauth]
Dec 06 09:44:07 np0005548790.localdomain sshd[219186]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:08 np0005548790.localdomain sudo[219278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itvyfbucktspvbwtbrhwqlrzmioosaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014248.1438572-2069-243463611535013/AnsiballZ_systemd_service.py
Dec 06 09:44:08 np0005548790.localdomain sudo[219278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:08 np0005548790.localdomain sshd[219186]: Invalid user test2 from 14.48.24.90 port 58802
Dec 06 09:44:08 np0005548790.localdomain python3.9[219280]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:09 np0005548790.localdomain sudo[219278]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:09 np0005548790.localdomain sshd[219186]: error: maximum authentication attempts exceeded for invalid user test2 from 14.48.24.90 port 58802 ssh2 [preauth]
Dec 06 09:44:09 np0005548790.localdomain sshd[219186]: Disconnecting invalid user test2 14.48.24.90 port 58802: Too many authentication failures [preauth]
Dec 06 09:44:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60535 DF PROTO=TCP SPT=55490 DPT=9100 SEQ=3385390523 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD2D1F0000000001030307) 
Dec 06 09:44:10 np0005548790.localdomain sudo[219389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gkylchpnlllbikqccpmnkqrzhlyqcecz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014249.914524-2069-65718698098617/AnsiballZ_systemd_service.py
Dec 06 09:44:10 np0005548790.localdomain sudo[219389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:10 np0005548790.localdomain sshd[219392]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:10 np0005548790.localdomain python3.9[219391]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:10 np0005548790.localdomain sudo[219389]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:11 np0005548790.localdomain sudo[219502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyclwuvujtgmimghbcavuvqnmmoeasre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014251.1381938-2069-15377775333756/AnsiballZ_systemd_service.py
Dec 06 09:44:11 np0005548790.localdomain sudo[219502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:11 np0005548790.localdomain sshd[219392]: Invalid user test2 from 14.48.24.90 port 59430
Dec 06 09:44:11 np0005548790.localdomain python3.9[219504]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:11 np0005548790.localdomain sudo[219502]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:12 np0005548790.localdomain sshd[219392]: Received disconnect from 14.48.24.90 port 59430:11: disconnected by user [preauth]
Dec 06 09:44:12 np0005548790.localdomain sshd[219392]: Disconnected from invalid user test2 14.48.24.90 port 59430 [preauth]
Dec 06 09:44:12 np0005548790.localdomain sshd[219615]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:12 np0005548790.localdomain sudo[219613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jysxjrhlzynabbcakzievgkciowwuxyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014251.9976046-2069-199508177693053/AnsiballZ_systemd_service.py
Dec 06 09:44:12 np0005548790.localdomain sudo[219613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:12 np0005548790.localdomain python3.9[219617]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:12 np0005548790.localdomain sudo[219613]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:13 np0005548790.localdomain sudo[219726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slpiaimgzomyfswjyrwggydysbkmdypf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014252.7521353-2069-51799289643087/AnsiballZ_systemd_service.py
Dec 06 09:44:13 np0005548790.localdomain sudo[219726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48909 DF PROTO=TCP SPT=52182 DPT=9100 SEQ=3043321480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD39DF0000000001030307) 
Dec 06 09:44:13 np0005548790.localdomain python3.9[219728]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:13 np0005548790.localdomain sudo[219726]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:13 np0005548790.localdomain sudo[219837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxmmwumtptqlauoofbghnkxovavnhdfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014253.4937365-2069-142483333818139/AnsiballZ_systemd_service.py
Dec 06 09:44:13 np0005548790.localdomain sudo[219837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:44:13 np0005548790.localdomain sshd[219615]: Invalid user ubuntu from 14.48.24.90 port 59874
Dec 06 09:44:13 np0005548790.localdomain podman[219840]: 2025-12-06 09:44:13.955908638 +0000 UTC m=+0.119869671 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:44:13 np0005548790.localdomain podman[219840]: 2025-12-06 09:44:13.970868341 +0000 UTC m=+0.134829434 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:44:13 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:44:14 np0005548790.localdomain python3.9[219839]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:14 np0005548790.localdomain sudo[219837]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:14 np0005548790.localdomain sudo[219967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbixperxgfjhnmbxclvdezsxrsfnhpfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014254.2935264-2069-69586601517262/AnsiballZ_systemd_service.py
Dec 06 09:44:14 np0005548790.localdomain sudo[219967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:14 np0005548790.localdomain python3.9[219969]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:14 np0005548790.localdomain sudo[219967]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:14 np0005548790.localdomain sshd[219615]: error: maximum authentication attempts exceeded for invalid user ubuntu from 14.48.24.90 port 59874 ssh2 [preauth]
Dec 06 09:44:14 np0005548790.localdomain sshd[219615]: Disconnecting invalid user ubuntu 14.48.24.90 port 59874: Too many authentication failures [preauth]
Dec 06 09:44:15 np0005548790.localdomain sudo[220078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epqekmcutfjnibysvddfacoedpujkuje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014255.0806434-2069-31544779866730/AnsiballZ_systemd_service.py
Dec 06 09:44:15 np0005548790.localdomain sudo[220078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:15 np0005548790.localdomain sshd[220081]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:15 np0005548790.localdomain python3.9[220080]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:44:15 np0005548790.localdomain sudo[220078]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:16 np0005548790.localdomain sshd[220081]: Invalid user ubuntu from 14.48.24.90 port 60532
Dec 06 09:44:17 np0005548790.localdomain sshd[220081]: error: maximum authentication attempts exceeded for invalid user ubuntu from 14.48.24.90 port 60532 ssh2 [preauth]
Dec 06 09:44:17 np0005548790.localdomain sshd[220081]: Disconnecting invalid user ubuntu 14.48.24.90 port 60532: Too many authentication failures [preauth]
Dec 06 09:44:18 np0005548790.localdomain sshd[220101]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13116 DF PROTO=TCP SPT=44486 DPT=9102 SEQ=188143747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD4DC20000000001030307) 
Dec 06 09:44:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3946 DF PROTO=TCP SPT=44622 DPT=9882 SEQ=2778849868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD4DC80000000001030307) 
Dec 06 09:44:19 np0005548790.localdomain sudo[220193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtcmluuzzedehexkkctsbtlkpwaxwzfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014258.7858748-2246-106247919910362/AnsiballZ_file.py
Dec 06 09:44:19 np0005548790.localdomain sudo[220193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:19 np0005548790.localdomain python3.9[220195]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:19 np0005548790.localdomain sudo[220193]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:19 np0005548790.localdomain sshd[220101]: Invalid user ubuntu from 14.48.24.90 port 32928
Dec 06 09:44:19 np0005548790.localdomain sudo[220303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqufstbgttnpjojbaxbcffdjuspxppnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014259.4368467-2246-110282553791169/AnsiballZ_file.py
Dec 06 09:44:19 np0005548790.localdomain sudo[220303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:19 np0005548790.localdomain python3.9[220305]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:19 np0005548790.localdomain sudo[220303]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:20 np0005548790.localdomain sudo[220413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtlmtazbqoqftokhtpuiabyzampafuxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014260.062143-2246-252133060777279/AnsiballZ_file.py
Dec 06 09:44:20 np0005548790.localdomain sudo[220413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:20 np0005548790.localdomain sshd[220101]: Received disconnect from 14.48.24.90 port 32928:11: disconnected by user [preauth]
Dec 06 09:44:20 np0005548790.localdomain sshd[220101]: Disconnected from invalid user ubuntu 14.48.24.90 port 32928 [preauth]
Dec 06 09:44:20 np0005548790.localdomain python3.9[220415]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:20 np0005548790.localdomain sudo[220413]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:20 np0005548790.localdomain sshd[220464]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:20 np0005548790.localdomain sudo[220525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbywmfdhjrheifvujliheclxrqppbovd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014260.606392-2246-240638344254082/AnsiballZ_file.py
Dec 06 09:44:20 np0005548790.localdomain sudo[220525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:21 np0005548790.localdomain python3.9[220527]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:21 np0005548790.localdomain sudo[220525]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:21 np0005548790.localdomain sudo[220635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cndcsjjaqnrusnqirwlbvgboddjmjqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014261.192885-2246-82504778974522/AnsiballZ_file.py
Dec 06 09:44:21 np0005548790.localdomain sudo[220635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3948 DF PROTO=TCP SPT=44622 DPT=9882 SEQ=2778849868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD59DF0000000001030307) 
Dec 06 09:44:21 np0005548790.localdomain python3.9[220637]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:21 np0005548790.localdomain sudo[220635]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:22 np0005548790.localdomain sshd[220464]: Invalid user pi from 14.48.24.90 port 33448
Dec 06 09:44:22 np0005548790.localdomain sudo[220745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vskwgoligogbiqsjbiqriievqnqrjvfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014262.2225597-2246-184645468543271/AnsiballZ_file.py
Dec 06 09:44:22 np0005548790.localdomain sudo[220745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:22 np0005548790.localdomain python3.9[220747]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:22 np0005548790.localdomain sudo[220745]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:23 np0005548790.localdomain sshd[220464]: Received disconnect from 14.48.24.90 port 33448:11: disconnected by user [preauth]
Dec 06 09:44:23 np0005548790.localdomain sshd[220464]: Disconnected from invalid user pi 14.48.24.90 port 33448 [preauth]
Dec 06 09:44:23 np0005548790.localdomain sudo[220855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjykrrikwhetfeowznxquiztfymrijdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014262.8842762-2246-119590626635499/AnsiballZ_file.py
Dec 06 09:44:23 np0005548790.localdomain sudo[220855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:23 np0005548790.localdomain sshd[220858]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:23 np0005548790.localdomain python3.9[220857]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:23 np0005548790.localdomain sudo[220855]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:23 np0005548790.localdomain sudo[220967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgjwvqhvrhkrpjxbxmruwybrrwdlgpta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014263.4889994-2246-112999759708394/AnsiballZ_file.py
Dec 06 09:44:23 np0005548790.localdomain sudo[220967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:24 np0005548790.localdomain python3.9[220969]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:24 np0005548790.localdomain sudo[220967]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15388 DF PROTO=TCP SPT=56458 DPT=9105 SEQ=3909730165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD661F0000000001030307) 
Dec 06 09:44:24 np0005548790.localdomain sshd[220858]: Invalid user baikal from 14.48.24.90 port 33946
Dec 06 09:44:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:44:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:44:25 np0005548790.localdomain podman[220988]: 2025-12-06 09:44:25.009746863 +0000 UTC m=+0.084196799 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:44:25 np0005548790.localdomain podman[220987]: 2025-12-06 09:44:25.076278856 +0000 UTC m=+0.151795741 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:44:25 np0005548790.localdomain sshd[220858]: Received disconnect from 14.48.24.90 port 33946:11: disconnected by user [preauth]
Dec 06 09:44:25 np0005548790.localdomain sshd[220858]: Disconnected from invalid user baikal 14.48.24.90 port 33946 [preauth]
Dec 06 09:44:25 np0005548790.localdomain podman[220987]: 2025-12-06 09:44:25.110746765 +0000 UTC m=+0.186263660 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:44:25 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:44:25 np0005548790.localdomain podman[220988]: 2025-12-06 09:44:25.161206335 +0000 UTC m=+0.235656321 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:44:25 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:44:25 np0005548790.localdomain sudo[221118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuepxodmilfctcantxtgkgduwxgcemwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014265.519662-2418-239189824325729/AnsiballZ_file.py
Dec 06 09:44:25 np0005548790.localdomain sudo[221118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:26 np0005548790.localdomain python3.9[221120]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:26 np0005548790.localdomain sudo[221118]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:26 np0005548790.localdomain sudo[221228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtlayfmpumfpmcofaqcgazlsodqcufbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014266.1533194-2418-229185417078606/AnsiballZ_file.py
Dec 06 09:44:26 np0005548790.localdomain sudo[221228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:26 np0005548790.localdomain python3.9[221230]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:26 np0005548790.localdomain sudo[221228]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:27 np0005548790.localdomain sudo[221338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-veiwitumdccekxschajfrjvsxmtuqecv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014266.822812-2418-64001138340108/AnsiballZ_file.py
Dec 06 09:44:27 np0005548790.localdomain sudo[221338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:27 np0005548790.localdomain python3.9[221340]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:27 np0005548790.localdomain sudo[221338]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64806 DF PROTO=TCP SPT=55962 DPT=9105 SEQ=27927424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD711F0000000001030307) 
Dec 06 09:44:27 np0005548790.localdomain sudo[221448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpuufnsnsgcznebrhmftlperteewfnpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014267.4526823-2418-236582935432278/AnsiballZ_file.py
Dec 06 09:44:27 np0005548790.localdomain sudo[221448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:28 np0005548790.localdomain python3.9[221450]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:28 np0005548790.localdomain sudo[221448]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:28 np0005548790.localdomain sudo[221558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgycilmzurweojgbruplqdimbsmgqgor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014268.2958589-2418-82186942491811/AnsiballZ_file.py
Dec 06 09:44:28 np0005548790.localdomain sudo[221558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:28 np0005548790.localdomain python3.9[221560]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:28 np0005548790.localdomain sudo[221558]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:29 np0005548790.localdomain sudo[221668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odcvzgpfmwmuehmojzeaaoyqbihpzkpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014268.9629054-2418-130496950784251/AnsiballZ_file.py
Dec 06 09:44:29 np0005548790.localdomain sudo[221668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:29 np0005548790.localdomain python3.9[221670]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:29 np0005548790.localdomain sudo[221668]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:29 np0005548790.localdomain sudo[221778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fibgsnjinlqcdxqtfhhjzfexbyfnrgre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014269.5649526-2418-197741926840012/AnsiballZ_file.py
Dec 06 09:44:29 np0005548790.localdomain sudo[221778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:30 np0005548790.localdomain python3.9[221780]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:30 np0005548790.localdomain sudo[221778]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:30 np0005548790.localdomain sudo[221888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nejmkyjzgwanvuvqekzzhvtqokihwgzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014270.202153-2418-67859218374028/AnsiballZ_file.py
Dec 06 09:44:30 np0005548790.localdomain sudo[221888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:30 np0005548790.localdomain python3.9[221890]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:44:30 np0005548790.localdomain sudo[221888]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15390 DF PROTO=TCP SPT=56458 DPT=9105 SEQ=3909730165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD7DDF0000000001030307) 
Dec 06 09:44:31 np0005548790.localdomain sudo[221998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxrvnsiogsqzqyfcpvyuroliunvtepba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014271.1502862-2592-253988430883940/AnsiballZ_command.py
Dec 06 09:44:31 np0005548790.localdomain sudo[221998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:31 np0005548790.localdomain python3.9[222000]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:31 np0005548790.localdomain sudo[221998]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:32 np0005548790.localdomain python3.9[222110]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:44:33 np0005548790.localdomain sudo[222218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnvnarxnmtghlbdbyndrkfccjelshmgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014272.8273592-2645-91899347322964/AnsiballZ_systemd_service.py
Dec 06 09:44:33 np0005548790.localdomain sudo[222218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:33 np0005548790.localdomain python3.9[222220]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:44:33 np0005548790.localdomain systemd-sysv-generator[222252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:44:33 np0005548790.localdomain systemd-rc-local-generator[222247]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:44:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3950 DF PROTO=TCP SPT=44622 DPT=9882 SEQ=2778849868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD89200000000001030307) 
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:44:33 np0005548790.localdomain sudo[222218]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:35 np0005548790.localdomain sudo[222365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihiyrhihcrkgskycstjjcdqrryrbywdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014274.0258386-2670-195245234147602/AnsiballZ_command.py
Dec 06 09:44:35 np0005548790.localdomain sudo[222365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:35 np0005548790.localdomain python3.9[222367]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:35 np0005548790.localdomain sudo[222365]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:36 np0005548790.localdomain sudo[222476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrkwfsjrundjlvdlybikooongyceabjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014275.7675185-2670-8104235669309/AnsiballZ_command.py
Dec 06 09:44:36 np0005548790.localdomain sudo[222476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:36 np0005548790.localdomain python3.9[222478]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:36 np0005548790.localdomain sudo[222476]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57503 DF PROTO=TCP SPT=36060 DPT=9101 SEQ=617393997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FD951F0000000001030307) 
Dec 06 09:44:36 np0005548790.localdomain sudo[222587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntlmlmicdgqdcprcggjklwedtaaahcgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014276.4130485-2670-255698816371595/AnsiballZ_command.py
Dec 06 09:44:36 np0005548790.localdomain sudo[222587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:36 np0005548790.localdomain python3.9[222589]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:36 np0005548790.localdomain sudo[222587]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:37 np0005548790.localdomain sudo[222698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fioxcyjgybrsrxqxdotapnmskscuepwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014277.700907-2670-166662563191698/AnsiballZ_command.py
Dec 06 09:44:37 np0005548790.localdomain sudo[222698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:38 np0005548790.localdomain python3.9[222700]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:38 np0005548790.localdomain sudo[222698]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:38 np0005548790.localdomain sudo[222809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akewohbblrorigmtzpayaoooekoejkha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014278.3361382-2670-144955220070023/AnsiballZ_command.py
Dec 06 09:44:38 np0005548790.localdomain sudo[222809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:38 np0005548790.localdomain python3.9[222811]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:38 np0005548790.localdomain sudo[222809]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:39 np0005548790.localdomain sudo[222920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhhnvguhtohilrsgykkzjhvjbemxiuuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014278.9956737-2670-266328071969301/AnsiballZ_command.py
Dec 06 09:44:39 np0005548790.localdomain sudo[222920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:39 np0005548790.localdomain python3.9[222922]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:39 np0005548790.localdomain sudo[222920]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:39 np0005548790.localdomain sudo[223031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hggkvbelgxpttubhqkjhqwvgmnwitixd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014279.5736613-2670-204456796603759/AnsiballZ_command.py
Dec 06 09:44:39 np0005548790.localdomain sudo[223031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:40 np0005548790.localdomain python3.9[223033]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:40 np0005548790.localdomain sudo[223031]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26092 DF PROTO=TCP SPT=47412 DPT=9100 SEQ=1874915620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDA31F0000000001030307) 
Dec 06 09:44:40 np0005548790.localdomain sudo[223142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmunopzuqnsrlmhwqdscokqrcbmjiwlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014280.2037866-2670-140937105713829/AnsiballZ_command.py
Dec 06 09:44:40 np0005548790.localdomain sudo[223142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:40 np0005548790.localdomain python3.9[223144]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:44:40 np0005548790.localdomain sudo[223142]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:42 np0005548790.localdomain sudo[223253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-laymlcwuwwzgmcdfkxsfuysudopugztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014282.3724227-2877-220636271327689/AnsiballZ_file.py
Dec 06 09:44:42 np0005548790.localdomain sudo[223253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:42 np0005548790.localdomain python3.9[223255]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:42 np0005548790.localdomain sudo[223253]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:43 np0005548790.localdomain sudo[223363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyjjbdaezgnkgfjvlbxqncicgkkpeuxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014282.9865997-2877-199645745398846/AnsiballZ_file.py
Dec 06 09:44:43 np0005548790.localdomain sudo[223363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29942 DF PROTO=TCP SPT=33274 DPT=9100 SEQ=3236169768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDAF1F0000000001030307) 
Dec 06 09:44:43 np0005548790.localdomain python3.9[223365]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:43 np0005548790.localdomain sudo[223363]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:43 np0005548790.localdomain sudo[223473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpduumsfduzrpawpbdnmlqpypvdmupvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014283.6127713-2877-186372034324887/AnsiballZ_file.py
Dec 06 09:44:43 np0005548790.localdomain sudo[223473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:44 np0005548790.localdomain python3.9[223475]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:44 np0005548790.localdomain sudo[223473]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:44:44 np0005548790.localdomain podman[223547]: 2025-12-06 09:44:44.582366385 +0000 UTC m=+0.086022201 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:44:44 np0005548790.localdomain podman[223547]: 2025-12-06 09:44:44.625253245 +0000 UTC m=+0.128909111 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:44:44 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:44:44 np0005548790.localdomain sudo[223602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-suygktafcnqppsldgfvclugwtimwktif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014284.3622534-2943-139440342536270/AnsiballZ_file.py
Dec 06 09:44:44 np0005548790.localdomain sudo[223602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:44 np0005548790.localdomain python3.9[223604]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:44 np0005548790.localdomain sudo[223602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:45 np0005548790.localdomain sudo[223713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwdlijkgnwvnqcfqqqdabvhqualxzqah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014285.0270646-2943-57355683288522/AnsiballZ_file.py
Dec 06 09:44:45 np0005548790.localdomain sudo[223713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:45 np0005548790.localdomain python3.9[223715]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:45 np0005548790.localdomain sudo[223713]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:46 np0005548790.localdomain sudo[223823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ldhfrkzdioxpwnksfzqfifebdtjwoixy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014285.750853-2943-6310829032355/AnsiballZ_file.py
Dec 06 09:44:46 np0005548790.localdomain sudo[223823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:46 np0005548790.localdomain python3.9[223825]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:46 np0005548790.localdomain sudo[223823]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:46 np0005548790.localdomain sudo[223933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsqyynvacpvrzlmbenfrtsjgaplcxqua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014286.3546634-2943-71934402600228/AnsiballZ_file.py
Dec 06 09:44:46 np0005548790.localdomain sudo[223933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:46 np0005548790.localdomain python3.9[223935]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:46 np0005548790.localdomain sudo[223933]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:47 np0005548790.localdomain sudo[224043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgqeeqdnqwklvqcuakhbobzkocpfzfjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014286.9689035-2943-138977037429010/AnsiballZ_file.py
Dec 06 09:44:47 np0005548790.localdomain sudo[224043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:47 np0005548790.localdomain python3.9[224045]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:47 np0005548790.localdomain sudo[224043]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:44:48.351 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:44:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:44:48.351 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:44:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:44:48.352 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:44:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54867 DF PROTO=TCP SPT=47594 DPT=9102 SEQ=754247052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDC2F10000000001030307) 
Dec 06 09:44:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8281 DF PROTO=TCP SPT=47268 DPT=9882 SEQ=137193491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDC2F80000000001030307) 
Dec 06 09:44:48 np0005548790.localdomain sudo[224153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwstxbiujjvactfcdpfbcodytyngtznn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014287.6130443-2943-73238144046108/AnsiballZ_file.py
Dec 06 09:44:48 np0005548790.localdomain sudo[224153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:48 np0005548790.localdomain python3.9[224155]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:48 np0005548790.localdomain sudo[224153]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:49 np0005548790.localdomain sudo[224263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uiqruymdnrexyelpfzcfupjdpprbpvoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014289.0643382-2943-258240501849407/AnsiballZ_file.py
Dec 06 09:44:49 np0005548790.localdomain sudo[224263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:49 np0005548790.localdomain python3.9[224265]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:44:49 np0005548790.localdomain sudo[224263]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54869 DF PROTO=TCP SPT=47594 DPT=9102 SEQ=754247052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDCEDF0000000001030307) 
Dec 06 09:44:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60407 DF PROTO=TCP SPT=46136 DPT=9105 SEQ=579710487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDDB5F0000000001030307) 
Dec 06 09:44:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:44:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:44:55 np0005548790.localdomain systemd[1]: tmp-crun.dPskVo.mount: Deactivated successfully.
Dec 06 09:44:55 np0005548790.localdomain podman[224283]: 2025-12-06 09:44:55.575601272 +0000 UTC m=+0.090849498 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:44:55 np0005548790.localdomain podman[224284]: 2025-12-06 09:44:55.630221889 +0000 UTC m=+0.141280903 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 09:44:55 np0005548790.localdomain podman[224283]: 2025-12-06 09:44:55.65673399 +0000 UTC m=+0.171982196 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:44:55 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:44:55 np0005548790.localdomain podman[224284]: 2025-12-06 09:44:55.748120104 +0000 UTC m=+0.259179038 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:44:55 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:44:56 np0005548790.localdomain sudo[224417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esnstqvonkcwiyfgsrefedinellivbjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014296.019736-3268-139946254396941/AnsiballZ_getent.py
Dec 06 09:44:56 np0005548790.localdomain sudo[224417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:56 np0005548790.localdomain python3.9[224419]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 06 09:44:56 np0005548790.localdomain sudo[224417]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:57 np0005548790.localdomain sudo[224528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-maqowopsqcuneskwhoifznbosvrwapuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014296.906069-3291-56113909050542/AnsiballZ_group.py
Dec 06 09:44:57 np0005548790.localdomain sudo[224528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:57 np0005548790.localdomain python3.9[224530]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:44:57 np0005548790.localdomain groupadd[224531]: group added to /etc/group: name=nova, GID=42436
Dec 06 09:44:57 np0005548790.localdomain groupadd[224531]: group added to /etc/gshadow: name=nova
Dec 06 09:44:57 np0005548790.localdomain groupadd[224531]: new group: name=nova, GID=42436
Dec 06 09:44:57 np0005548790.localdomain sudo[224528]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32069 DF PROTO=TCP SPT=37104 DPT=9105 SEQ=3055687102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDE71F0000000001030307) 
Dec 06 09:44:58 np0005548790.localdomain sudo[224644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sssbshdjvbnkspxbxowdiumxgxgyvozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014297.8282542-3316-194307253559573/AnsiballZ_user.py
Dec 06 09:44:58 np0005548790.localdomain sudo[224644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:44:58 np0005548790.localdomain python3.9[224646]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548790.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:44:58 np0005548790.localdomain useradd[224648]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 06 09:44:58 np0005548790.localdomain useradd[224648]: add 'nova' to group 'libvirt'
Dec 06 09:44:58 np0005548790.localdomain useradd[224648]: add 'nova' to shadow group 'libvirt'
Dec 06 09:44:58 np0005548790.localdomain sudo[224644]: pam_unix(sudo:session): session closed for user root
Dec 06 09:44:59 np0005548790.localdomain sshd[224672]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:44:59 np0005548790.localdomain sshd[224672]: Accepted publickey for zuul from 192.168.122.30 port 42346 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:44:59 np0005548790.localdomain systemd-logind[760]: New session 54 of user zuul.
Dec 06 09:44:59 np0005548790.localdomain systemd[1]: Started Session 54 of User zuul.
Dec 06 09:44:59 np0005548790.localdomain sshd[224672]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:44:59 np0005548790.localdomain sshd[224675]: Received disconnect from 192.168.122.30 port 42346:11: disconnected by user
Dec 06 09:44:59 np0005548790.localdomain sshd[224675]: Disconnected from user zuul 192.168.122.30 port 42346
Dec 06 09:44:59 np0005548790.localdomain sshd[224672]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:44:59 np0005548790.localdomain systemd[1]: session-54.scope: Deactivated successfully.
Dec 06 09:44:59 np0005548790.localdomain systemd-logind[760]: Session 54 logged out. Waiting for processes to exit.
Dec 06 09:44:59 np0005548790.localdomain systemd-logind[760]: Removed session 54.
Dec 06 09:45:00 np0005548790.localdomain python3.9[224783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60409 DF PROTO=TCP SPT=46136 DPT=9105 SEQ=579710487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDF31F0000000001030307) 
Dec 06 09:45:01 np0005548790.localdomain python3.9[224869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014300.1466885-3390-167842760168573/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:02 np0005548790.localdomain sudo[224941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:02 np0005548790.localdomain sudo[224941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:45:02 np0005548790.localdomain sudo[224941]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:02 np0005548790.localdomain sudo[224996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 09:45:02 np0005548790.localdomain sudo[224996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:45:02 np0005548790.localdomain python3.9[224995]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:02 np0005548790.localdomain python3.9[225087]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:03 np0005548790.localdomain systemd[1]: tmp-crun.bjcz3m.mount: Deactivated successfully.
Dec 06 09:45:03 np0005548790.localdomain podman[225178]: 2025-12-06 09:45:03.184957661 +0000 UTC m=+0.093561772 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, distribution-scope=public, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vcs-type=git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7)
Dec 06 09:45:03 np0005548790.localdomain podman[225178]: 2025-12-06 09:45:03.27876041 +0000 UTC m=+0.187364491 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True)
Dec 06 09:45:03 np0005548790.localdomain sudo[224996]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:03 np0005548790.localdomain sudo[225262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:45:03 np0005548790.localdomain sudo[225262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:45:03 np0005548790.localdomain sudo[225262]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:03 np0005548790.localdomain sudo[225280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:45:03 np0005548790.localdomain sudo[225280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:45:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54871 DF PROTO=TCP SPT=47594 DPT=9102 SEQ=754247052 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FDFF1F0000000001030307) 
Dec 06 09:45:04 np0005548790.localdomain python3.9[225350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:04 np0005548790.localdomain sudo[225280]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:04 np0005548790.localdomain python3.9[225467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014303.058673-3390-73056911911934/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:05 np0005548790.localdomain sudo[225576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:45:05 np0005548790.localdomain sudo[225576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:45:05 np0005548790.localdomain sudo[225576]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:05 np0005548790.localdomain python3.9[225575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:05 np0005548790.localdomain python3.9[225679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014304.9324481-3390-174109274598376/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=da0d7199af82d0dd331a8c2ddfaef41c68c4707d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:06 np0005548790.localdomain python3.9[225787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:07 np0005548790.localdomain python3.9[225873]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014306.1392167-3390-273766847462308/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53613 DF PROTO=TCP SPT=38246 DPT=9100 SEQ=4027383519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE0C600000000001030307) 
Dec 06 09:45:07 np0005548790.localdomain python3.9[225981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:08 np0005548790.localdomain python3.9[226067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014307.1750882-3390-14548335451345/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:09 np0005548790.localdomain sudo[226175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydpwrljkulsxgsqrhiqzkjpsjfcbflyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014308.9682896-3641-69612273897052/AnsiballZ_file.py
Dec 06 09:45:09 np0005548790.localdomain sudo[226175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:09 np0005548790.localdomain python3.9[226177]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:45:09 np0005548790.localdomain sudo[226175]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:09 np0005548790.localdomain sudo[226285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxfcokvttcbemgpiozzpnzcbouerlrau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014309.6717436-3664-219792112295973/AnsiballZ_copy.py
Dec 06 09:45:09 np0005548790.localdomain sudo[226285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:10 np0005548790.localdomain python3.9[226287]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:45:10 np0005548790.localdomain sudo[226285]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48912 DF PROTO=TCP SPT=52182 DPT=9100 SEQ=3043321480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE191F0000000001030307) 
Dec 06 09:45:10 np0005548790.localdomain sudo[226395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-foktsnopgzosftgqtmzcuiaxtgfirgga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014310.3366742-3688-83564222383988/AnsiballZ_stat.py
Dec 06 09:45:10 np0005548790.localdomain sudo[226395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:10 np0005548790.localdomain python3.9[226397]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:10 np0005548790.localdomain sudo[226395]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:11 np0005548790.localdomain sudo[226507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emolvozeqhrdtysfpnyvitrxidxgaauq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014311.1424584-3715-279835138100844/AnsiballZ_file.py
Dec 06 09:45:11 np0005548790.localdomain sudo[226507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:11 np0005548790.localdomain python3.9[226509]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:45:11 np0005548790.localdomain sudo[226507]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:12 np0005548790.localdomain python3.9[226617]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:13 np0005548790.localdomain python3.9[226727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53615 DF PROTO=TCP SPT=38246 DPT=9100 SEQ=4027383519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE241F0000000001030307) 
Dec 06 09:45:13 np0005548790.localdomain python3.9[226813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014312.5835736-3767-268711137697002/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:14 np0005548790.localdomain python3.9[226921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:45:14 np0005548790.localdomain python3.9[227007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014313.7493896-3811-199965415843780/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:45:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:45:15 np0005548790.localdomain sudo[227125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihgxbvfoscedvgbjfsegvzfrhpsfjvjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014315.275151-3862-115661326818571/AnsiballZ_container_config_data.py
Dec 06 09:45:15 np0005548790.localdomain sudo[227125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:15 np0005548790.localdomain podman[227098]: 2025-12-06 09:45:15.57692786 +0000 UTC m=+0.084190481 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:45:15 np0005548790.localdomain podman[227098]: 2025-12-06 09:45:15.596063624 +0000 UTC m=+0.103326275 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 09:45:15 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:45:15 np0005548790.localdomain python3.9[227128]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 06 09:45:15 np0005548790.localdomain sudo[227125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:16 np0005548790.localdomain sudo[227244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjlnvafhsiggbqfqjtjntlzbhpnbjoly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014316.1462755-3889-246515597205882/AnsiballZ_container_config_hash.py
Dec 06 09:45:16 np0005548790.localdomain sudo[227244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:16 np0005548790.localdomain sshd[227247]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:45:16 np0005548790.localdomain python3.9[227246]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:45:16 np0005548790.localdomain sudo[227244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:17 np0005548790.localdomain sudo[227356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-womarydlixendqahdlrumtenmkkefqul ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014317.1042795-3918-35816577502370/AnsiballZ_edpm_container_manage.py
Dec 06 09:45:17 np0005548790.localdomain sudo[227356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:17 np0005548790.localdomain sshd[227247]: Connection reset by authenticating user root 45.135.232.92 port 56490 [preauth]
Dec 06 09:45:17 np0005548790.localdomain python3[227358]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:45:17 np0005548790.localdomain sshd[227360]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:45:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41602 DF PROTO=TCP SPT=37656 DPT=9102 SEQ=3384589045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE38210000000001030307) 
Dec 06 09:45:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27157 DF PROTO=TCP SPT=35430 DPT=9882 SEQ=2986927324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE38270000000001030307) 
Dec 06 09:45:18 np0005548790.localdomain sshd[227360]: Invalid user osmc from 45.135.232.92 port 56498
Dec 06 09:45:19 np0005548790.localdomain sshd[227360]: Connection reset by invalid user osmc 45.135.232.92 port 56498 [preauth]
Dec 06 09:45:19 np0005548790.localdomain sshd[227387]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:45:20 np0005548790.localdomain sshd[227387]: Invalid user install from 45.135.232.92 port 56512
Dec 06 09:45:20 np0005548790.localdomain sshd[227387]: Connection reset by invalid user install 45.135.232.92 port 56512 [preauth]
Dec 06 09:45:21 np0005548790.localdomain sshd[227402]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:45:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41604 DF PROTO=TCP SPT=37656 DPT=9102 SEQ=3384589045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE44200000000001030307) 
Dec 06 09:45:22 np0005548790.localdomain sshd[227402]: Connection reset by authenticating user root 45.135.232.92 port 56514 [preauth]
Dec 06 09:45:22 np0005548790.localdomain sshd[227404]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:45:23 np0005548790.localdomain sshd[227404]: Invalid user admin from 45.135.232.92 port 56528
Dec 06 09:45:23 np0005548790.localdomain sshd[227404]: Connection reset by invalid user admin 45.135.232.92 port 56528 [preauth]
Dec 06 09:45:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61472 DF PROTO=TCP SPT=59788 DPT=9105 SEQ=1676669871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE509F0000000001030307) 
Dec 06 09:45:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:45:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:45:27 np0005548790.localdomain podman[227419]: 2025-12-06 09:45:27.723005428 +0000 UTC m=+1.234294684 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:45:27 np0005548790.localdomain podman[227419]: 2025-12-06 09:45:27.792167394 +0000 UTC m=+1.303456640 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 09:45:27 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:45:27 np0005548790.localdomain podman[227373]: 2025-12-06 09:45:17.768619771 +0000 UTC m=+0.044715641 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 09:45:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15393 DF PROTO=TCP SPT=56458 DPT=9105 SEQ=3909730165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE5D1F0000000001030307) 
Dec 06 09:45:27 np0005548790.localdomain podman[227420]: 2025-12-06 09:45:27.875667225 +0000 UTC m=+1.387529726 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:45:27 np0005548790.localdomain podman[227420]: 2025-12-06 09:45:27.917289993 +0000 UTC m=+1.429152534 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:45:27 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:45:28 np0005548790.localdomain podman[227485]: 
Dec 06 09:45:28 np0005548790.localdomain podman[227485]: 2025-12-06 09:45:28.090843562 +0000 UTC m=+0.085812855 container create a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible, config_id=edpm)
Dec 06 09:45:28 np0005548790.localdomain podman[227485]: 2025-12-06 09:45:28.050493559 +0000 UTC m=+0.045462862 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 09:45:28 np0005548790.localdomain python3[227358]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 06 09:45:28 np0005548790.localdomain sudo[227356]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:29 np0005548790.localdomain sudo[227629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjyfcxwmdfggxikgcfgykjsysiszzgit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014329.6734076-3943-270796904258756/AnsiballZ_stat.py
Dec 06 09:45:29 np0005548790.localdomain sudo[227629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:30 np0005548790.localdomain python3.9[227631]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:30 np0005548790.localdomain sudo[227629]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61474 DF PROTO=TCP SPT=59788 DPT=9105 SEQ=1676669871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE685F0000000001030307) 
Dec 06 09:45:31 np0005548790.localdomain sudo[227741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqnhpadmyydkxvcybgfmwdczxghwhdtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014330.9998894-3978-55136816396174/AnsiballZ_container_config_data.py
Dec 06 09:45:31 np0005548790.localdomain sudo[227741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:31 np0005548790.localdomain python3.9[227743]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 06 09:45:31 np0005548790.localdomain sudo[227741]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:32 np0005548790.localdomain sudo[227851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqtamcvfzcbydtuhlxujbsicaycbbnya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014331.8637743-4006-223173617024246/AnsiballZ_container_config_hash.py
Dec 06 09:45:32 np0005548790.localdomain sudo[227851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:32 np0005548790.localdomain python3.9[227853]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:45:32 np0005548790.localdomain sudo[227851]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:33 np0005548790.localdomain sudo[227961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqpnljqclebcqirxwlmranydvxbtobcs ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014332.849416-4035-275585859280539/AnsiballZ_edpm_container_manage.py
Dec 06 09:45:33 np0005548790.localdomain sudo[227961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:33 np0005548790.localdomain python3[227963]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:45:33 np0005548790.localdomain python3[227963]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 09:45:33 np0005548790.localdomain podman[228014]: 2025-12-06 09:45:33.878228493 +0000 UTC m=+0.082875576 container remove 1839791d1614579f8e33c50a104d5b12e1b80ed3874a0a72f7fa47e51249c254 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '02a6418b6bc78669b6757e55b0a3cf68-1c14d9f34e8565ad391b489e982af70f'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 06 09:45:33 np0005548790.localdomain python3[227963]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Dec 06 09:45:33 np0005548790.localdomain podman[228028]: 
Dec 06 09:45:33 np0005548790.localdomain podman[228028]: 2025-12-06 09:45:33.982213004 +0000 UTC m=+0.086613866 container create 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:45:33 np0005548790.localdomain podman[228028]: 2025-12-06 09:45:33.941224104 +0000 UTC m=+0.045624996 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 09:45:33 np0005548790.localdomain python3[227963]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 06 09:45:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41606 DF PROTO=TCP SPT=37656 DPT=9102 SEQ=3384589045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE751F0000000001030307) 
Dec 06 09:45:34 np0005548790.localdomain sudo[227961]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:35 np0005548790.localdomain sudo[228174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-muupdkdivggobgkogmnawckdhmofcddi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014335.345104-4061-220469682346898/AnsiballZ_stat.py
Dec 06 09:45:35 np0005548790.localdomain sudo[228174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:35 np0005548790.localdomain python3.9[228176]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:35 np0005548790.localdomain sudo[228174]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:36 np0005548790.localdomain sudo[228286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpmwzpqykpnmzhjpsfokhgndhqnesghj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014336.2329023-4086-137213626968743/AnsiballZ_file.py
Dec 06 09:45:36 np0005548790.localdomain sudo[228286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37660 DF PROTO=TCP SPT=47902 DPT=9101 SEQ=1764638136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE7F1F0000000001030307) 
Dec 06 09:45:36 np0005548790.localdomain python3.9[228288]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:45:36 np0005548790.localdomain sudo[228286]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:37 np0005548790.localdomain sudo[228395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvjoqewaricnydeizeuextkxyenddlak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014336.7742684-4086-243731611524240/AnsiballZ_copy.py
Dec 06 09:45:37 np0005548790.localdomain sudo[228395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:37 np0005548790.localdomain python3.9[228397]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014336.7742684-4086-243731611524240/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:45:37 np0005548790.localdomain sudo[228395]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:37 np0005548790.localdomain sudo[228450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gohwgusddsddqsjjgmerxxssqotfcjcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014336.7742684-4086-243731611524240/AnsiballZ_systemd.py
Dec 06 09:45:37 np0005548790.localdomain sudo[228450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:37 np0005548790.localdomain python3.9[228452]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:45:37 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:45:38 np0005548790.localdomain systemd-rc-local-generator[228469]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:45:38 np0005548790.localdomain systemd-sysv-generator[228472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:38 np0005548790.localdomain sudo[228450]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:38 np0005548790.localdomain sudo[228541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sewhqcflhyhqbgxlwqpcgzzkftcwgolo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014336.7742684-4086-243731611524240/AnsiballZ_systemd.py
Dec 06 09:45:38 np0005548790.localdomain sudo[228541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:38 np0005548790.localdomain python3.9[228543]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:45:38 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:45:38 np0005548790.localdomain systemd-rc-local-generator[228568]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:45:38 np0005548790.localdomain systemd-sysv-generator[228571]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: Starting nova_compute container...
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: tmp-crun.eMozv6.mount: Deactivated successfully.
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:45:39 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:39 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:39 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:39 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:39 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:39 np0005548790.localdomain podman[228584]: 2025-12-06 09:45:39.343429607 +0000 UTC m=+0.124123332 container init 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:45:39 np0005548790.localdomain podman[228584]: 2025-12-06 09:45:39.356900459 +0000 UTC m=+0.137594184 container start 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:45:39 np0005548790.localdomain podman[228584]: nova_compute
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + sudo -E kolla_set_configs
Dec 06 09:45:39 np0005548790.localdomain systemd[1]: Started nova_compute container.
Dec 06 09:45:39 np0005548790.localdomain sudo[228541]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Validating config file
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying service configuration files
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Deleting /etc/ceph
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Creating directory /etc/ceph
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Writing out command to execute
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: ++ cat /run_command
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + CMD=nova-compute
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + ARGS=
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + sudo kolla_copy_cacerts
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + [[ ! -n '' ]]
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + . kolla_extend_start
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: Running command: 'nova-compute'
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + umask 0022
Dec 06 09:45:39 np0005548790.localdomain nova_compute[228597]: + exec nova-compute
Dec 06 09:45:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29945 DF PROTO=TCP SPT=33274 DPT=9100 SEQ=3236169768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE8D1F0000000001030307) 
Dec 06 09:45:40 np0005548790.localdomain systemd[1]: tmp-crun.pvnTd4.mount: Deactivated successfully.
Dec 06 09:45:40 np0005548790.localdomain python3.9[228717]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.191 228601 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.191 228601 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.192 228601 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.192 228601 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.306 228601 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.328 228601 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.328 228601 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 09:45:41 np0005548790.localdomain python3.9[228829]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.858 228601 INFO nova.virt.driver [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.972 228601 INFO nova.compute.provider_config [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.979 228601 WARNING nova.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.979 228601 DEBUG oslo_concurrency.lockutils [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.980 228601 DEBUG oslo_concurrency.lockutils [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.980 228601 DEBUG oslo_concurrency.lockutils [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.980 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.980 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.980 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.980 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.981 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.982 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.982 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.982 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.982 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.982 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.982 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] console_host                   = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.983 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.984 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.984 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.984 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.984 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.984 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.984 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.985 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.986 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.986 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.986 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.986 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.986 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.986 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.987 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.987 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.987 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.987 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.987 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.987 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.988 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.989 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.990 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.991 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.991 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.991 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.991 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.991 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.991 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.992 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.993 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.994 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.995 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.995 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.995 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.995 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.995 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.995 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.996 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.997 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.998 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:41 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:41.999 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.000 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.001 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.002 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.003 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.004 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.005 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.006 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.007 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.008 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.009 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.010 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.011 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.012 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.013 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.014 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.015 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.016 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.017 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.018 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.019 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.020 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.021 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.022 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.023 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.024 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.025 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.026 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.027 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.028 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.029 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.029 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.029 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.029 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.029 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.029 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.030 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.030 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.030 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.030 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.030 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.030 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.031 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.031 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.031 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.031 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.031 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.031 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.032 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.033 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.034 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.035 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.035 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.035 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.035 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.035 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.035 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.036 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.036 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.036 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.036 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.036 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.036 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.037 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.037 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.037 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.037 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.037 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.037 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.038 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.039 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.040 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.041 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.042 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.042 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.042 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.042 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.042 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.042 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.043 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.044 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.044 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.044 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.044 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.044 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.044 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.045 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.046 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.046 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.046 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.046 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.046 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.046 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.047 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.047 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.047 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.047 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.047 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.047 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.048 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.049 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.050 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.050 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.050 228601 WARNING oslo_config.cfg [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: and ``live_migration_inbound_addr`` respectively.
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: ).  Its value may be silently ignored in the future.
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.050 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.050 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.050 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.051 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.052 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rbd_secret_uuid        = 1939e851-b10c-5c3b-9bb7-8e7f380233e8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.053 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.054 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.054 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.054 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.054 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.054 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.054 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.055 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.055 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.055 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.055 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.055 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.055 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.056 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.057 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.058 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.059 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.059 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.059 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.059 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.059 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.059 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.060 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.061 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.062 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.063 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.064 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.065 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.066 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.067 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.068 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.069 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.070 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.070 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.070 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.070 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.070 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.070 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.071 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.072 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.073 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.074 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.075 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.076 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.076 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.076 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.076 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.076 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.076 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.077 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.078 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.079 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.080 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.081 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.082 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.083 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.084 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.085 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.086 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.086 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.086 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.086 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.086 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.086 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.087 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.088 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.089 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.090 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.091 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.091 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.091 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.091 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.091 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.091 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.092 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.093 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.094 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.095 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.096 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.097 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.098 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.098 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.098 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.098 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.098 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.098 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.099 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.100 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.101 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.102 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.103 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.104 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.104 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.104 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.104 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.104 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.104 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.105 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.106 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.107 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.108 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.109 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.110 228601 DEBUG oslo_service.service [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.111 228601 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.122 228601 INFO nova.virt.node [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Determined node identity 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from /var/lib/nova/compute_id
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.123 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.123 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.123 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.124 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 09:45:42 np0005548790.localdomain systemd[1]: Started libvirt QEMU daemon.
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.187 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6f09fda490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.191 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6f09fda490> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.192 228601 INFO nova.virt.libvirt.driver [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Connection event '1' reason 'None'
Dec 06 09:45:42 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:42.209 228601 DEBUG nova.virt.libvirt.volume.mount [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.101 228601 INFO nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <host>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <uuid>f03c6239-85fa-4e2b-b1f7-56cf939bb96f</uuid>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <arch>x86_64</arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model>EPYC-Rome-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <vendor>AMD</vendor>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <microcode version='16777317'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <signature family='23' model='49' stepping='0'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='x2apic'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='tsc-deadline'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='osxsave'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='hypervisor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='tsc_adjust'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='spec-ctrl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='stibp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='arch-capabilities'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='cmp_legacy'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='topoext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='virt-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='lbrv'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='tsc-scale'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='vmcb-clean'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='pause-filter'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='pfthreshold'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='svme-addr-chk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='rdctl-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='mds-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature name='pschange-mc-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <pages unit='KiB' size='4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <pages unit='KiB' size='2048'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <pages unit='KiB' size='1048576'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <power_management>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <suspend_mem/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <suspend_disk/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <suspend_hybrid/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </power_management>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <iommu support='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <migration_features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <live/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <uri_transports>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <uri_transport>tcp</uri_transport>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <uri_transport>rdma</uri_transport>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </uri_transports>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </migration_features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <topology>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <cells num='1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <cell id='0'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           <memory unit='KiB'>16116612</memory>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           <distances>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <sibling id='0' value='10'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           </distances>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           <cpus num='8'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:           </cpus>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         </cell>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </cells>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </topology>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <cache>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </cache>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <secmodel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model>selinux</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <doi>0</doi>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </secmodel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <secmodel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model>dac</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <doi>0</doi>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </secmodel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </host>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <guest>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <os_type>hvm</os_type>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <arch name='i686'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <wordsize>32</wordsize>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <domain type='qemu'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <domain type='kvm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <pae/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <nonpae/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <acpi default='on' toggle='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <apic default='on' toggle='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <cpuselection/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <deviceboot/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <disksnapshot default='on' toggle='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <externalSnapshot/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </guest>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <guest>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <os_type>hvm</os_type>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <arch name='x86_64'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <wordsize>64</wordsize>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <domain type='qemu'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <domain type='kvm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <acpi default='on' toggle='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <apic default='on' toggle='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <cpuselection/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <deviceboot/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <disksnapshot default='on' toggle='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <externalSnapshot/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </guest>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: </capabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.110 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.129 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: <domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <domain>kvm</domain>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <arch>i686</arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <vcpu max='1024'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <iothreads supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <os supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='firmware'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <loader supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>rom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pflash</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='readonly'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>yes</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='secure'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </loader>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </os>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='maximumMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <vendor>AMD</vendor>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='succor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='custom' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-128'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-256'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-512'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <memoryBacking supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='sourceType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>anonymous</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>memfd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </memoryBacking>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <disk supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='diskDevice'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>disk</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cdrom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>floppy</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>lun</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>fdc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>sata</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </disk>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <graphics supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vnc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egl-headless</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </graphics>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <video supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='modelType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vga</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cirrus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>none</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>bochs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ramfb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </video>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hostdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='mode'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>subsystem</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='startupPolicy'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>mandatory</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>requisite</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>optional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='subsysType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pci</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='capsType'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='pciBackend'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hostdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <rng supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>random</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </rng>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <filesystem supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='driverType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>path</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>handle</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtiofs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </filesystem>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <tpm supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-tis</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-crb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emulator</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>external</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendVersion'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>2.0</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </tpm>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <redirdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </redirdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <channel supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </channel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <crypto supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </crypto>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <interface supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>passt</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </interface>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <panic supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>isa</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>hyperv</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </panic>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <console supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>null</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dev</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pipe</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stdio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>udp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tcp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu-vdagent</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </console>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <gic supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <genid supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backup supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <async-teardown supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <ps2 supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sev supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sgx supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hyperv supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='features'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>relaxed</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vapic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>spinlocks</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vpindex</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>runtime</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>synic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stimer</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reset</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vendor_id</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>frequencies</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reenlightenment</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tlbflush</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ipi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>avic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emsr_bitmap</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>xmm_input</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hyperv>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <launchSecurity supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='sectype'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tdx</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </launchSecurity>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: </domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.137 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: <domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <domain>kvm</domain>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <arch>i686</arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <vcpu max='240'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <iothreads supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <os supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='firmware'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <loader supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>rom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pflash</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='readonly'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>yes</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='secure'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </loader>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </os>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='maximumMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <vendor>AMD</vendor>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='succor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='custom' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-128'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-256'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-512'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <memoryBacking supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='sourceType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>anonymous</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>memfd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </memoryBacking>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <disk supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='diskDevice'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>disk</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cdrom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>floppy</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>lun</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ide</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>fdc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>sata</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </disk>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <graphics supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vnc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egl-headless</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </graphics>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <video supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='modelType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vga</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cirrus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>none</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>bochs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ramfb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </video>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hostdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='mode'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>subsystem</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='startupPolicy'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>mandatory</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>requisite</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>optional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='subsysType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pci</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='capsType'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='pciBackend'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hostdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <rng supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>random</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </rng>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <filesystem supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='driverType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>path</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>handle</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtiofs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </filesystem>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <tpm supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-tis</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-crb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emulator</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>external</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendVersion'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>2.0</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </tpm>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <redirdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </redirdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <channel supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </channel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <crypto supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </crypto>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <interface supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>passt</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </interface>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <panic supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>isa</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>hyperv</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </panic>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <console supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>null</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dev</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pipe</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stdio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>udp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tcp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu-vdagent</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </console>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <gic supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <genid supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backup supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <async-teardown supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <ps2 supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sev supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sgx supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hyperv supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='features'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>relaxed</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vapic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>spinlocks</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vpindex</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>runtime</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>synic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stimer</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reset</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vendor_id</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>frequencies</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reenlightenment</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tlbflush</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ipi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>avic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emsr_bitmap</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>xmm_input</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hyperv>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <launchSecurity supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='sectype'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tdx</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </launchSecurity>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: </domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.184 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.190 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: <domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <domain>kvm</domain>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <arch>x86_64</arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <vcpu max='1024'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <iothreads supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <os supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='firmware'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>efi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <loader supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>rom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pflash</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='readonly'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>yes</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='secure'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>yes</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </loader>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </os>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='maximumMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <vendor>AMD</vendor>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='succor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='custom' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-128'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-256'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-512'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36942 DF PROTO=TCP SPT=40606 DPT=9100 SEQ=2510765874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FE995F0000000001030307) 
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <memoryBacking supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='sourceType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>anonymous</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>memfd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </memoryBacking>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <disk supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='diskDevice'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>disk</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cdrom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>floppy</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>lun</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>fdc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>sata</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </disk>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <graphics supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vnc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egl-headless</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </graphics>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <video supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='modelType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vga</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cirrus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>none</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>bochs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ramfb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </video>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hostdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='mode'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>subsystem</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='startupPolicy'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>mandatory</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>requisite</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>optional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='subsysType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pci</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='capsType'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='pciBackend'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hostdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <rng supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>random</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </rng>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <filesystem supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='driverType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>path</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>handle</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtiofs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </filesystem>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <tpm supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-tis</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-crb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emulator</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>external</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendVersion'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>2.0</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </tpm>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <redirdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </redirdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <channel supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </channel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <crypto supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </crypto>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <interface supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>passt</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </interface>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <panic supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>isa</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>hyperv</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </panic>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <console supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>null</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dev</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pipe</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stdio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>udp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tcp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu-vdagent</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </console>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <gic supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <genid supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backup supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <async-teardown supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <ps2 supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sev supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sgx supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hyperv supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='features'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>relaxed</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vapic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>spinlocks</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vpindex</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>runtime</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>synic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stimer</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reset</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vendor_id</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>frequencies</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reenlightenment</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tlbflush</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ipi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>avic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emsr_bitmap</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>xmm_input</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hyperv>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <launchSecurity supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='sectype'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tdx</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </launchSecurity>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: </domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.246 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: <domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <domain>kvm</domain>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <arch>x86_64</arch>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <vcpu max='240'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <iothreads supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <os supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='firmware'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <loader supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>rom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pflash</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='readonly'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>yes</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='secure'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>no</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </loader>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </os>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='maximumMigratable'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>on</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>off</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <vendor>AMD</vendor>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='succor'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <mode name='custom' supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Denverton-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='auto-ibrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amd-psfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='stibp-always-on'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='EPYC-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-128'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-256'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx10-512'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='prefetchiti'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Haswell-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512er'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512pf'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fma4'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tbm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xop'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='amx-tile'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-bf16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-fp16'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bitalg'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrc'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fzrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='la57'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='taa-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xfd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ifma'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cmpccxadd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fbsdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='fsrs'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ibrs-all'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mcdt-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pbrsb-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='psdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='serialize'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vaes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='hle'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='rtm'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512bw'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512cd'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512dq'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512f'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='avx512vl'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='invpcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pcid'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='pku'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='mpx'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='core-capability'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='split-lock-detect'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='cldemote'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='erms'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='gfni'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdir64b'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='movdiri'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='xsaves'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='athlon-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='core2duo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='coreduo-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='n270-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='ss'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <blockers model='phenom-v1'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnow'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <feature name='3dnowext'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </blockers>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </mode>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </cpu>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <memoryBacking supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <enum name='sourceType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>anonymous</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <value>memfd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </memoryBacking>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <disk supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='diskDevice'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>disk</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cdrom</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>floppy</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>lun</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ide</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>fdc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>sata</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </disk>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <graphics supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vnc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egl-headless</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </graphics>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <video supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='modelType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vga</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>cirrus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>none</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>bochs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ramfb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </video>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hostdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='mode'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>subsystem</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='startupPolicy'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>mandatory</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>requisite</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>optional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='subsysType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pci</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>scsi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='capsType'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='pciBackend'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hostdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <rng supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtio-non-transitional</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>random</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>egd</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </rng>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <filesystem supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='driverType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>path</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>handle</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>virtiofs</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </filesystem>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <tpm supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-tis</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tpm-crb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emulator</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>external</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendVersion'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>2.0</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </tpm>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <redirdev supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='bus'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>usb</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </redirdev>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <channel supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </channel>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <crypto supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendModel'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>builtin</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </crypto>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <interface supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='backendType'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>default</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>passt</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </interface>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <panic supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='model'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>isa</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>hyperv</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </panic>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <console supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='type'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>null</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vc</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pty</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dev</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>file</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>pipe</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stdio</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>udp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tcp</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>unix</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>qemu-vdagent</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>dbus</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </console>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </devices>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   <features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <gic supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <genid supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <backup supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <async-teardown supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <ps2 supported='yes'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sev supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <sgx supported='no'/>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <hyperv supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='features'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>relaxed</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vapic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>spinlocks</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vpindex</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>runtime</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>synic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>stimer</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reset</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>vendor_id</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>frequencies</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>reenlightenment</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tlbflush</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>ipi</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>avic</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>emsr_bitmap</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>xmm_input</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </defaults>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </hyperv>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     <launchSecurity supported='yes'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       <enum name='sectype'>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:         <value>tdx</value>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:       </enum>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:     </launchSecurity>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:   </features>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: </domainCapabilities>
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.295 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.296 228601 INFO nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Secure Boot support detected
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.298 228601 INFO nova.virt.libvirt.driver [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.309 228601 DEBUG nova.virt.libvirt.driver [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.332 228601 INFO nova.virt.node [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Determined node identity 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from /var/lib/nova/compute_id
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.367 228601 DEBUG nova.compute.manager [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Verified node 9d142787-bd19-4b53-bf45-24c0e0c1cff0 matches my host np0005548790.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.401 228601 INFO nova.compute.manager [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.857 228601 INFO nova.service [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Updating service version for nova-compute on np0005548790.localdomain from 57 to 66
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.893 228601 DEBUG oslo_concurrency.lockutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.893 228601 DEBUG oslo_concurrency.lockutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.894 228601 DEBUG oslo_concurrency.lockutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.894 228601 DEBUG nova.compute.resource_tracker [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:45:43 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:43.895 228601 DEBUG oslo_concurrency.processutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.350 228601 DEBUG oslo_concurrency.processutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:45:44 np0005548790.localdomain systemd[1]: Started libvirt nodedev daemon.
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.692 228601 WARNING nova.virt.libvirt.driver [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.694 228601 DEBUG nova.compute.resource_tracker [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13637MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.694 228601 DEBUG oslo_concurrency.lockutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.694 228601 DEBUG oslo_concurrency.lockutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.870 228601 DEBUG nova.compute.resource_tracker [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.870 228601 DEBUG nova.compute.resource_tracker [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:45:44 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:44.942 228601 DEBUG nova.scheduler.client.report [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.007 228601 DEBUG nova.scheduler.client.report [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.008 228601 DEBUG nova.compute.provider_tree [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.023 228601 DEBUG nova.scheduler.client.report [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.047 228601 DEBUG nova.scheduler.client.report [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSSE3,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_LAN9118,HW_CPU_X86_SSE2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.065 228601 DEBUG oslo_concurrency.processutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.541 228601 DEBUG oslo_concurrency.processutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.548 228601 DEBUG nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.548 228601 INFO nova.virt.libvirt.host [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] kernel doesn't support AMD SEV
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.551 228601 DEBUG nova.compute.provider_tree [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.551 228601 DEBUG nova.virt.libvirt.driver [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.572 228601 DEBUG nova.scheduler.client.report [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:45:45 np0005548790.localdomain python3.9[229067]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.687 228601 DEBUG nova.compute.provider_tree [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Updating resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.712 228601 DEBUG nova.compute.resource_tracker [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.712 228601 DEBUG oslo_concurrency.lockutils [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.713 228601 DEBUG nova.service [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.758 228601 DEBUG nova.service [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 06 09:45:45 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:45.759 228601 DEBUG nova.servicegroup.drivers.db [None req-29b40ac1-b584-447a-8d24-52199d587c7d - - - - - -] DB_Driver: join new ServiceGroup member np0005548790.localdomain to the compute group, service = <Service: host=np0005548790.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 06 09:45:46 np0005548790.localdomain sudo[229177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjtscgtnokdpbhspjmojkyfppbqqmtvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014345.9808087-4267-280353067381651/AnsiballZ_podman_container.py
Dec 06 09:45:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:45:46 np0005548790.localdomain sudo[229177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:46 np0005548790.localdomain podman[229179]: 2025-12-06 09:45:46.573456454 +0000 UTC m=+0.083520354 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 09:45:46 np0005548790.localdomain podman[229179]: 2025-12-06 09:45:46.611518305 +0000 UTC m=+0.121582195 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:45:46 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:45:46 np0005548790.localdomain python3.9[229180]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 09:45:46 np0005548790.localdomain sudo[229177]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:46 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 121.0 (403 of 333 items), suggesting rotation.
Dec 06 09:45:46 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 09:45:46 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:45:46 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:45:48 np0005548790.localdomain sudo[229330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhhwlfbuylszrqmtfsqvsrsrtqcttucm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014347.7755709-4291-211014933808154/AnsiballZ_systemd.py
Dec 06 09:45:48 np0005548790.localdomain sudo[229330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:48 np0005548790.localdomain python3.9[229332]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:45:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:45:48.352 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:45:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:45:48.354 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:45:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:45:48.354 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:45:48 np0005548790.localdomain systemd[1]: Stopping nova_compute container...
Dec 06 09:45:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43729 DF PROTO=TCP SPT=48666 DPT=9102 SEQ=1964801877 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEAD510000000001030307) 
Dec 06 09:45:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9772 DF PROTO=TCP SPT=51560 DPT=9882 SEQ=3173196481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEAD580000000001030307) 
Dec 06 09:45:48 np0005548790.localdomain systemd[1]: tmp-crun.roPPWS.mount: Deactivated successfully.
Dec 06 09:45:49 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:49.878 228601 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Dec 06 09:45:49 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:49.880 228601 DEBUG oslo_concurrency.lockutils [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:45:49 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:49.880 228601 DEBUG oslo_concurrency.lockutils [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:45:49 np0005548790.localdomain nova_compute[228597]: 2025-12-06 09:45:49.881 228601 DEBUG oslo_concurrency.lockutils [None req-bb418cb8-a1fc-4c30-96a4-00867f38a93d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:45:50 np0005548790.localdomain systemd[1]: libpod-30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26.scope: Deactivated successfully.
Dec 06 09:45:50 np0005548790.localdomain virtqemud[228868]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 06 09:45:50 np0005548790.localdomain virtqemud[228868]: hostname: np0005548790.localdomain
Dec 06 09:45:50 np0005548790.localdomain virtqemud[228868]: End of file while reading data: Input/output error
Dec 06 09:45:50 np0005548790.localdomain systemd[1]: libpod-30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26.scope: Consumed 3.846s CPU time.
Dec 06 09:45:50 np0005548790.localdomain podman[229336]: 2025-12-06 09:45:50.235632378 +0000 UTC m=+1.856959988 container died 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec 06 09:45:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26-userdata-shm.mount: Deactivated successfully.
Dec 06 09:45:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382-merged.mount: Deactivated successfully.
Dec 06 09:45:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36943 DF PROTO=TCP SPT=40606 DPT=9100 SEQ=2510765874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEB9200000000001030307) 
Dec 06 09:45:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2783 DF PROTO=TCP SPT=32930 DPT=9105 SEQ=1230918687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEC59F0000000001030307) 
Dec 06 09:45:54 np0005548790.localdomain podman[229336]: 2025-12-06 09:45:54.717097825 +0000 UTC m=+6.338425435 container cleanup 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec 06 09:45:54 np0005548790.localdomain podman[229336]: nova_compute
Dec 06 09:45:54 np0005548790.localdomain podman[229616]: error opening file `/run/crun/30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26/status`: No such file or directory
Dec 06 09:45:54 np0005548790.localdomain podman[229603]: 2025-12-06 09:45:54.825861404 +0000 UTC m=+0.075721603 container cleanup 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 09:45:54 np0005548790.localdomain systemd[1]: tmp-crun.2qkAi7.mount: Deactivated successfully.
Dec 06 09:45:54 np0005548790.localdomain podman[229603]: nova_compute
Dec 06 09:45:54 np0005548790.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 06 09:45:54 np0005548790.localdomain systemd[1]: Stopped nova_compute container.
Dec 06 09:45:54 np0005548790.localdomain systemd[1]: Starting nova_compute container...
Dec 06 09:45:54 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:45:54 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:54 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:54 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:54 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:54 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:54 np0005548790.localdomain podman[229618]: 2025-12-06 09:45:54.969453699 +0000 UTC m=+0.110457987 container init 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.build-date=20251125)
Dec 06 09:45:54 np0005548790.localdomain podman[229618]: 2025-12-06 09:45:54.978584453 +0000 UTC m=+0.119588741 container start 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:45:54 np0005548790.localdomain podman[229618]: nova_compute
Dec 06 09:45:54 np0005548790.localdomain nova_compute[229633]: + sudo -E kolla_set_configs
Dec 06 09:45:54 np0005548790.localdomain systemd[1]: Started nova_compute container.
Dec 06 09:45:55 np0005548790.localdomain sudo[229330]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Validating config file
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying service configuration files
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /etc/ceph
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Creating directory /etc/ceph
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Writing out command to execute
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: ++ cat /run_command
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + CMD=nova-compute
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + ARGS=
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + sudo kolla_copy_cacerts
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + [[ ! -n '' ]]
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + . kolla_extend_start
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: Running command: 'nova-compute'
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + umask 0022
Dec 06 09:45:55 np0005548790.localdomain nova_compute[229633]: + exec nova-compute
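The entries above show kolla's container entrypoint at work: for each managed file it deletes the target, copies the staged config into place, and sets permissions, then reads /run_command and execs the result ("Running command: 'nova-compute'"). A minimal sketch of that per-file flow, assuming illustrative names (`apply_config`, `read_run_command` are not kolla's real API; the actual logic lives in kolla's config.json handling):

```python
import os
import shutil


def apply_config(source: str, dest: str, mode: int) -> None:
    """One kolla-style config step: delete, copy, set permission.

    Mirrors the logged sequence "Deleting <dest>" / "Copying <source>
    to <dest>" / "Setting permission for <dest>". Illustrative only.
    """
    if os.path.lexists(dest):
        os.remove(dest)            # Deleting <dest>
    shutil.copy2(source, dest)     # Copying <source> to <dest>
    os.chmod(dest, mode)           # Setting permission for <dest>


def read_run_command(path: str = "/run_command") -> str:
    """kolla_start cats /run_command to learn what to exec (here: nova-compute)."""
    with open(path) as f:
        return f.read().strip()
```

In the trace above the final step is `exec nova-compute`, so the entrypoint process is replaced by the service itself rather than remaining as a wrapper parent.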
Dec 06 09:45:55 np0005548790.localdomain sudo[229752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqzlbwjmctgmgrffjsuwqysexztlyvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014355.29396-4317-154432091630663/AnsiballZ_podman_container.py
Dec 06 09:45:55 np0005548790.localdomain sudo[229752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:45:55 np0005548790.localdomain python3.9[229754]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: Started libpod-conmon-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7.scope.
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:45:56 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:56 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:56 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 09:45:56 np0005548790.localdomain podman[229779]: 2025-12-06 09:45:56.115354758 +0000 UTC m=+0.127666958 container init a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 09:45:56 np0005548790.localdomain podman[229779]: 2025-12-06 09:45:56.127842873 +0000 UTC m=+0.140155073 container start a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251125, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:45:56 np0005548790.localdomain python3.9[229754]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Applying nova statedir ownership
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 06 09:45:56 np0005548790.localdomain nova_compute_init[229800]: INFO:nova_statedir:Nova statedir ownership complete
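The nova_compute_init pass logged above walks /var/lib/nova, leaves anything listed in NOVA_STATEDIR_OWNERSHIP_SKIP alone (here /var/lib/nova/compute_id), and chowns every path not already owned by the target nova uid:gid (42436:42436), logging either "Changing ownership" or "already". A dry-run model of just that decision logic, operating on a path-to-owner mapping so it needs no root (the real nova_statedir_ownership.py also sets the SELinux context, which is omitted here):

```python
from typing import Dict, List, Tuple

TARGET = (42436, 42436)  # nova uid:gid used in the log above


def plan_ownership(tree: Dict[str, Tuple[int, int]],
                   skip: Tuple[str, ...] = ("/var/lib/nova/compute_id",),
                   target: Tuple[int, int] = TARGET) -> List[str]:
    """Return the paths whose ownership would be changed.

    `tree` maps each path to its current (uid, gid); `skip` mirrors
    NOVA_STATEDIR_OWNERSHIP_SKIP. Paths already at `target` are left
    as-is ("Ownership ... already"), everything else would be chowned.
    """
    changes = []
    for path, owner in sorted(tree.items()):
        if path in skip:
            continue                  # skip entries are never touched
        if owner != target:
            changes.append(path)      # would chown path to target
    return changes
```

Under this model, the 1000:1000 paths in the log (/var/lib/nova, /var/lib/nova/instances) are exactly the ones selected for a chown, matching the "Changing ownership ... from 1000:1000 to 42436:42436" lines.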
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: libpod-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7.scope: Deactivated successfully.
Dec 06 09:45:56 np0005548790.localdomain podman[229798]: 2025-12-06 09:45:56.206946177 +0000 UTC m=+0.058410579 container died a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:45:56 np0005548790.localdomain podman[229813]: 2025-12-06 09:45:56.307333711 +0000 UTC m=+0.090256134 container cleanup a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: libpod-conmon-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7.scope: Deactivated successfully.
Dec 06 09:45:56 np0005548790.localdomain sudo[229752]: pam_unix(sudo:session): session closed for user root
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.738 229637 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.739 229637 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.739 229637 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.739 229637 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df-merged.mount: Deactivated successfully.
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7-userdata-shm.mount: Deactivated successfully.
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.855 229637 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.877 229637 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:45:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:56.877 229637 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
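The `grep -F node.session.scan /sbin/iscsiadm` command above is a capability probe: exit status 0 means the iscsiadm on this host (here a kolla run-on-host wrapper) mentions the `node.session.scan` option and manual scans can be used; status 1, as logged, means it does not, and the caller simply moves on ("failed. Not Retrying."). A minimal version of that probe, assuming `grep` is on PATH:

```python
import subprocess


def supports_manual_scan(iscsiadm_path: str = "/sbin/iscsiadm") -> bool:
    """True if the iscsiadm binary/wrapper mentions node.session.scan.

    Mirrors the logged check: grep exits 0 on a match, 1 when the
    fixed string is absent, so a nonzero return code is not an error
    here, just a "feature unavailable" answer.
    """
    result = subprocess.run(
        ["grep", "-F", "node.session.scan", iscsiadm_path],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

This is why the log records a returncode of 1 at DEBUG level rather than an exception: the probe's failure is an expected outcome, not a fault.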
Dec 06 09:45:56 np0005548790.localdomain sshd[207712]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: session-53.scope: Deactivated successfully.
Dec 06 09:45:56 np0005548790.localdomain systemd[1]: session-53.scope: Consumed 2min 11.993s CPU time.
Dec 06 09:45:56 np0005548790.localdomain systemd-logind[760]: Session 53 logged out. Waiting for processes to exit.
Dec 06 09:45:56 np0005548790.localdomain systemd-logind[760]: Removed session 53.
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.253 229637 INFO nova.virt.driver [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.364 229637 INFO nova.compute.provider_config [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.373 229637 WARNING nova.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
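The TooOldComputeService warning above is Nova's startup version guard firing: the cell still reports a minimum compute service level of 57, while the oldest level this release supports is 61. The comparison reduces to roughly the following (names illustrative; the real check lives in nova.service and raises nova.exception.TooOldComputeService):

```python
def check_compute_service_level(minimum_in_cell: int,
                                oldest_supported: int) -> None:
    """Reject startup when the cell contains compute services older
    than this release can interoperate with (cf. the logged values
    57 vs 61)."""
    if minimum_in_cell < oldest_supported:
        raise RuntimeError(
            f"minimum compute service level {minimum_in_cell} is below "
            f"the oldest supported level {oldest_supported}"
        )
```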
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.374 229637 DEBUG oslo_concurrency.lockutils [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.374 229637 DEBUG oslo_concurrency.lockutils [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.374 229637 DEBUG oslo_concurrency.lockutils [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.374 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.375 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.376 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.377 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.377 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.377 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.377 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.377 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] console_host                   = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.377 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.378 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.379 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.380 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.380 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.380 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.380 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.380 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.380 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.381 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.382 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.383 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.383 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.383 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.383 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.383 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.383 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.384 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.385 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.386 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.387 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.388 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.389 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.390 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.391 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.392 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.393 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.394 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.395 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.396 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.396 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.396 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.396 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.396 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.396 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.397 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.398 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.399 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.400 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.401 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.402 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.403 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.404 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.405 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.406 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.407 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.408 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.409 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.410 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.411 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.411 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.411 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.411 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.411 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.411 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.412 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.413 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.414 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.415 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.416 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.417 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.418 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.419 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.420 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.421 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.421 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.421 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.421 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.421 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.422 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.423 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.424 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.424 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.424 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.424 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.424 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.424 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.425 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.426 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.427 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.428 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.429 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.430 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.431 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.432 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.433 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.433 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.433 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.433 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.433 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.433 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.434 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.435 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.436 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.437 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.438 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.439 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.440 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.441 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.442 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 WARNING oslo_config.cfg [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: and ``live_migration_inbound_addr`` respectively.
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: ).  Its value may be silently ignored in the future.
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.443 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.444 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.444 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.444 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.444 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.444 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.444 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.445 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rbd_secret_uuid        = 1939e851-b10c-5c3b-9bb7-8e7f380233e8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.446 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.447 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.447 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.447 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.447 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.447 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.447 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.448 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.449 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.450 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.451 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.452 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.453 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.454 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.455 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.456 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.457 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.458 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.459 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.460 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.461 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.462 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.463 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.463 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.463 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.463 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.463 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.463 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.464 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.465 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.466 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.467 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.468 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.468 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.468 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.468 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.468 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.469 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.469 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.469 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.469 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.469 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.469 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.470 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.471 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.471 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.471 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.471 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.471 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.471 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.472 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.473 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.474 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.475 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.476 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.477 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.478 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.478 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.478 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.478 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.478 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.478 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.479 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.480 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.481 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.482 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.483 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.484 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.484 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.484 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.484 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.484 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.485 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.486 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.487 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.488 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.489 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.490 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.491 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.492 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.493 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.494 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.495 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.496 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.497 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.498 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.499 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.500 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.501 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.502 229637 DEBUG oslo_service.service [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.504 229637 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.523 229637 INFO nova.virt.node [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Determined node identity 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from /var/lib/nova/compute_id
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.524 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.525 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.526 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.526 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 09:45:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60412 DF PROTO=TCP SPT=46136 DPT=9105 SEQ=579710487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FED11F0000000001030307) 
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.556 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f41aea928e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.568 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f41aea928e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.576 229637 INFO nova.virt.libvirt.driver [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Connection event '1' reason 'None'
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.586 229637 INFO nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <host>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <uuid>f03c6239-85fa-4e2b-b1f7-56cf939bb96f</uuid>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <arch>x86_64</arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model>EPYC-Rome-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <vendor>AMD</vendor>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <microcode version='16777317'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <signature family='23' model='49' stepping='0'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='x2apic'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='tsc-deadline'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='osxsave'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='hypervisor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='tsc_adjust'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='spec-ctrl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='stibp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='arch-capabilities'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='cmp_legacy'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='topoext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='virt-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='lbrv'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='tsc-scale'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='vmcb-clean'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='pause-filter'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='pfthreshold'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='svme-addr-chk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='rdctl-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='mds-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature name='pschange-mc-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <pages unit='KiB' size='4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <pages unit='KiB' size='2048'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <pages unit='KiB' size='1048576'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <power_management>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <suspend_mem/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <suspend_disk/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <suspend_hybrid/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </power_management>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <iommu support='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <migration_features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <live/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <uri_transports>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <uri_transport>tcp</uri_transport>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <uri_transport>rdma</uri_transport>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </uri_transports>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </migration_features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <topology>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <cells num='1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <cell id='0'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           <memory unit='KiB'>16116612</memory>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           <distances>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <sibling id='0' value='10'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           </distances>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           <cpus num='8'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:           </cpus>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         </cell>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </cells>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </topology>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <cache>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </cache>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <secmodel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model>selinux</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <doi>0</doi>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </secmodel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <secmodel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model>dac</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <doi>0</doi>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </secmodel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </host>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <guest>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <os_type>hvm</os_type>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <arch name='i686'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <wordsize>32</wordsize>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <domain type='qemu'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <domain type='kvm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <pae/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <nonpae/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <acpi default='on' toggle='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <apic default='on' toggle='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <cpuselection/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <deviceboot/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <disksnapshot default='on' toggle='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <externalSnapshot/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </guest>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <guest>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <os_type>hvm</os_type>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <arch name='x86_64'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <wordsize>64</wordsize>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <domain type='qemu'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <domain type='kvm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <acpi default='on' toggle='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <apic default='on' toggle='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <cpuselection/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <deviceboot/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <disksnapshot default='on' toggle='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <externalSnapshot/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </guest>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: </capabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.596 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.600 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: <domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <domain>kvm</domain>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <arch>i686</arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <vcpu max='1024'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <iothreads supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <os supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='firmware'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <loader supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>rom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pflash</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='readonly'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>yes</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='secure'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </loader>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </os>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='maximumMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <vendor>AMD</vendor>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='succor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='custom' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-128'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-256'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-512'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <memoryBacking supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='sourceType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>anonymous</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>memfd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </memoryBacking>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <disk supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='diskDevice'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>disk</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cdrom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>floppy</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>lun</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>fdc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>sata</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </disk>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <graphics supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vnc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egl-headless</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </graphics>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <video supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='modelType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vga</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cirrus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>none</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>bochs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ramfb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </video>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hostdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='mode'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>subsystem</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='startupPolicy'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>mandatory</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>requisite</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>optional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='subsysType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pci</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='capsType'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='pciBackend'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hostdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <rng supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>random</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </rng>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <filesystem supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='driverType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>path</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>handle</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtiofs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </filesystem>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <tpm supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-tis</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-crb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emulator</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>external</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendVersion'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>2.0</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </tpm>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <redirdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </redirdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <channel supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </channel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <crypto supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </crypto>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <interface supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>passt</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </interface>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <panic supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>isa</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>hyperv</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </panic>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <console supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>null</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dev</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pipe</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stdio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>udp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tcp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu-vdagent</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </console>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <gic supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <genid supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backup supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <async-teardown supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <ps2 supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sev supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sgx supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hyperv supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='features'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>relaxed</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vapic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>spinlocks</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vpindex</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>runtime</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>synic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stimer</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reset</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vendor_id</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>frequencies</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reenlightenment</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tlbflush</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ipi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>avic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emsr_bitmap</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>xmm_input</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hyperv>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <launchSecurity supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='sectype'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tdx</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </launchSecurity>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: </domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.603 229637 DEBUG nova.virt.libvirt.volume.mount [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.608 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: <domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <domain>kvm</domain>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <arch>i686</arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <vcpu max='240'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <iothreads supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <os supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='firmware'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <loader supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>rom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pflash</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='readonly'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>yes</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='secure'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </loader>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </os>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='maximumMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <vendor>AMD</vendor>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='succor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='custom' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-128'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-256'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-512'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <memoryBacking supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='sourceType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>anonymous</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>memfd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </memoryBacking>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <disk supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='diskDevice'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>disk</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cdrom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>floppy</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>lun</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ide</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>fdc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>sata</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </disk>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <graphics supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vnc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egl-headless</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </graphics>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <video supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='modelType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vga</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cirrus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>none</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>bochs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ramfb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </video>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hostdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='mode'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>subsystem</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='startupPolicy'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>mandatory</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>requisite</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>optional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='subsysType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pci</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='capsType'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='pciBackend'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hostdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <rng supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>random</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </rng>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <filesystem supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='driverType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>path</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>handle</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtiofs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </filesystem>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <tpm supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-tis</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-crb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emulator</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>external</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendVersion'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>2.0</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </tpm>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <redirdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </redirdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <channel supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </channel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <crypto supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </crypto>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <interface supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>passt</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </interface>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <panic supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>isa</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>hyperv</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </panic>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <console supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>null</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dev</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pipe</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stdio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>udp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tcp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu-vdagent</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </console>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <gic supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <genid supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backup supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <async-teardown supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <ps2 supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sev supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sgx supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hyperv supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='features'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>relaxed</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vapic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>spinlocks</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vpindex</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>runtime</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>synic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stimer</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reset</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vendor_id</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>frequencies</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reenlightenment</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tlbflush</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ipi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>avic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emsr_bitmap</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>xmm_input</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hyperv>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <launchSecurity supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='sectype'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tdx</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </launchSecurity>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: </domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.639 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.644 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: <domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <domain>kvm</domain>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <arch>x86_64</arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <vcpu max='1024'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <iothreads supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <os supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='firmware'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>efi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <loader supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>rom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pflash</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='readonly'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>yes</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='secure'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>yes</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </loader>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </os>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='maximumMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <vendor>AMD</vendor>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='succor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='custom' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-128'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-256'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-512'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <memoryBacking supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='sourceType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>anonymous</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>memfd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </memoryBacking>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <disk supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='diskDevice'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>disk</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cdrom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>floppy</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>lun</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>fdc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>sata</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </disk>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <graphics supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vnc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egl-headless</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </graphics>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <video supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='modelType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vga</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cirrus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>none</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>bochs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ramfb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </video>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hostdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='mode'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>subsystem</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='startupPolicy'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>mandatory</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>requisite</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>optional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='subsysType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pci</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='capsType'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='pciBackend'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hostdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <rng supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>random</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </rng>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <filesystem supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='driverType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>path</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>handle</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtiofs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </filesystem>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <tpm supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-tis</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-crb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emulator</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>external</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendVersion'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>2.0</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </tpm>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <redirdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </redirdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <channel supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </channel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <crypto supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </crypto>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <interface supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>passt</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </interface>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <panic supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>isa</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>hyperv</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </panic>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <console supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>null</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dev</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pipe</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stdio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>udp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tcp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu-vdagent</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </console>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <gic supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <genid supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backup supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <async-teardown supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <ps2 supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sev supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sgx supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hyperv supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='features'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>relaxed</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vapic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>spinlocks</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vpindex</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>runtime</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>synic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stimer</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reset</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vendor_id</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>frequencies</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reenlightenment</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tlbflush</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ipi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>avic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emsr_bitmap</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>xmm_input</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hyperv>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <launchSecurity supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='sectype'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tdx</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </launchSecurity>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: </domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.696 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: <domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <domain>kvm</domain>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <arch>x86_64</arch>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <vcpu max='240'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <iothreads supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <os supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='firmware'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <loader supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>rom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pflash</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='readonly'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>yes</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='secure'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>no</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </loader>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </os>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='maximum' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='maximumMigratable'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>on</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>off</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='host-model' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <vendor>AMD</vendor>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='x2apic'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='stibp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='succor'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lbrv'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <mode name='custom' supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Broadwell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Cooperlake-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Denverton-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Dhyana-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='auto-ibrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amd-psfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='no-nested-data-bp'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='null-sel-clr-base'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='stibp-always-on'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='EPYC-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-128'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-256'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx10-512'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='prefetchiti'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Haswell-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='IvyBridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='KnightsMill-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4fmaps'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-4vnniw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512er'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512pf'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fma4'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tbm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xop'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='amx-tile'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-bf16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-fp16'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bitalg'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vbmi2'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrc'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fzrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='la57'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='taa-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='tsx-ldtrk'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xfd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='SierraForest-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ifma'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-ne-convert'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx-vnni-int8'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='bus-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cmpccxadd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fbsdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='fsrs'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ibrs-all'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mcdt-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pbrsb-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='psdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='serialize'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vaes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='vpclmulqdq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='hle'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='rtm'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512bw'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512cd'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512dq'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512f'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='avx512vl'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='invpcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pcid'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='pku'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='mpx'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v2'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v3'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='core-capability'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='split-lock-detect'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='Snowridge-v4'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='cldemote'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='erms'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='gfni'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdir64b'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='movdiri'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='xsaves'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='athlon-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='core2duo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='coreduo-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='n270-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='ss'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <blockers model='phenom-v1'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnow'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <feature name='3dnowext'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </blockers>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </mode>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </cpu>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <memoryBacking supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <enum name='sourceType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>anonymous</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <value>memfd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </memoryBacking>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <disk supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='diskDevice'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>disk</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cdrom</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>floppy</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>lun</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ide</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>fdc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>sata</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </disk>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <graphics supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vnc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egl-headless</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </graphics>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <video supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='modelType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vga</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>cirrus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>none</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>bochs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ramfb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </video>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hostdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='mode'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>subsystem</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='startupPolicy'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>mandatory</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>requisite</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>optional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='subsysType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pci</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>scsi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='capsType'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='pciBackend'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hostdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <rng supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtio-non-transitional</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>random</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>egd</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </rng>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <filesystem supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='driverType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>path</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>handle</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>virtiofs</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </filesystem>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <tpm supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-tis</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tpm-crb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emulator</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>external</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendVersion'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>2.0</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </tpm>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <redirdev supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='bus'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>usb</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </redirdev>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <channel supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </channel>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <crypto supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendModel'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>builtin</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </crypto>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <interface supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='backendType'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>default</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>passt</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </interface>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <panic supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='model'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>isa</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>hyperv</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </panic>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <console supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='type'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>null</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vc</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pty</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dev</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>file</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>pipe</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stdio</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>udp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tcp</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>unix</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>qemu-vdagent</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>dbus</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </console>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </devices>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   <features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <gic supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <vmcoreinfo supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <genid supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backingStoreInput supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <backup supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <async-teardown supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <ps2 supported='yes'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sev supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <sgx supported='no'/>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <hyperv supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='features'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>relaxed</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vapic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>spinlocks</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vpindex</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>runtime</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>synic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>stimer</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reset</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>vendor_id</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>frequencies</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>reenlightenment</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tlbflush</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>ipi</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>avic</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>emsr_bitmap</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>xmm_input</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <spinlocks>4095</spinlocks>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <stimer_direct>on</stimer_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </defaults>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </hyperv>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     <launchSecurity supported='yes'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       <enum name='sectype'>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:         <value>tdx</value>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:       </enum>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:     </launchSecurity>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:   </features>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: </domainCapabilities>
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.756 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.757 229637 INFO nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Secure Boot support detected
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.759 229637 INFO nova.virt.libvirt.driver [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.771 229637 DEBUG nova.virt.libvirt.driver [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.802 229637 INFO nova.virt.node [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Determined node identity 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from /var/lib/nova/compute_id
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.825 229637 DEBUG nova.compute.manager [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Verified node 9d142787-bd19-4b53-bf45-24c0e0c1cff0 matches my host np0005548790.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.856 229637 INFO nova.compute.manager [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.991 229637 DEBUG oslo_concurrency.lockutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.991 229637 DEBUG oslo_concurrency.lockutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.992 229637 DEBUG oslo_concurrency.lockutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.992 229637 DEBUG nova.compute.resource_tracker [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:45:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:57.993 229637 DEBUG oslo_concurrency.processutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.409 229637 DEBUG oslo_concurrency.processutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:45:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:45:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:45:58 np0005548790.localdomain systemd[1]: tmp-crun.CRsrRz.mount: Deactivated successfully.
Dec 06 09:45:58 np0005548790.localdomain podman[229906]: 2025-12-06 09:45:58.575017471 +0000 UTC m=+0.085635290 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:45:58 np0005548790.localdomain podman[229905]: 2025-12-06 09:45:58.652845322 +0000 UTC m=+0.164963531 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:45:58 np0005548790.localdomain podman[229905]: 2025-12-06 09:45:58.688145635 +0000 UTC m=+0.200263854 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.695 229637 WARNING nova.virt.libvirt.driver [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.697 229637 DEBUG nova.compute.resource_tracker [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13621MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.697 229637 DEBUG oslo_concurrency.lockutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.698 229637 DEBUG oslo_concurrency.lockutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:45:58 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:45:58 np0005548790.localdomain podman[229906]: 2025-12-06 09:45:58.739368715 +0000 UTC m=+0.249986554 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 09:45:58 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.837 229637 DEBUG nova.compute.resource_tracker [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.838 229637 DEBUG nova.compute.resource_tracker [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.910 229637 DEBUG nova.scheduler.client.report [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.938 229637 DEBUG nova.scheduler.client.report [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.938 229637 DEBUG nova.compute.provider_tree [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 09:45:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:58.969 229637 DEBUG nova.scheduler.client.report [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.022 229637 DEBUG nova.scheduler.client.report [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.043 229637 DEBUG oslo_concurrency.processutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.491 229637 DEBUG oslo_concurrency.processutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.496 229637 DEBUG nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.497 229637 INFO nova.virt.libvirt.host [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] kernel doesn't support AMD SEV
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.498 229637 DEBUG nova.compute.provider_tree [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.498 229637 DEBUG nova.virt.libvirt.driver [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.531 229637 DEBUG nova.scheduler.client.report [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.555 229637 DEBUG nova.compute.resource_tracker [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.555 229637 DEBUG oslo_concurrency.lockutils [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.556 229637 DEBUG nova.service [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.602 229637 DEBUG nova.service [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 06 09:45:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:45:59.603 229637 DEBUG nova.servicegroup.drivers.db [None req-89f40e39-20de-4add-a6ae-52da68e08ca7 - - - - - -] DB_Driver: join new ServiceGroup member np0005548790.localdomain to the compute group, service = <Service: host=np0005548790.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 06 09:46:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2785 DF PROTO=TCP SPT=32930 DPT=9105 SEQ=1230918687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEDD600000000001030307) 
Dec 06 09:46:02 np0005548790.localdomain sshd[229971]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:46:03 np0005548790.localdomain sshd[229971]: Accepted publickey for zuul from 192.168.122.30 port 51284 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:46:03 np0005548790.localdomain systemd-logind[760]: New session 55 of user zuul.
Dec 06 09:46:03 np0005548790.localdomain systemd[1]: Started Session 55 of User zuul.
Dec 06 09:46:03 np0005548790.localdomain sshd[229971]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:46:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43733 DF PROTO=TCP SPT=48666 DPT=9102 SEQ=1964801877 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEE91F0000000001030307) 
Dec 06 09:46:04 np0005548790.localdomain python3.9[230082]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:46:05 np0005548790.localdomain sudo[230158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:46:05 np0005548790.localdomain sudo[230158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:46:05 np0005548790.localdomain sudo[230158]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:05 np0005548790.localdomain sudo[230193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:46:05 np0005548790.localdomain sudo[230193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:46:05 np0005548790.localdomain sudo[230229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oaqmpnwtbyrkeqtyitrrgkuqccemaqib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014364.9209628-70-105837170290241/AnsiballZ_systemd_service.py
Dec 06 09:46:05 np0005548790.localdomain sudo[230229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:05 np0005548790.localdomain python3.9[230232]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:46:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:46:05 np0005548790.localdomain systemd-rc-local-generator[230270]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:46:05 np0005548790.localdomain systemd-sysv-generator[230273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:06 np0005548790.localdomain sudo[230193]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:06 np0005548790.localdomain sudo[230229]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43668 DF PROTO=TCP SPT=35552 DPT=9101 SEQ=347702036 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FEF51F0000000001030307) 
Dec 06 09:46:06 np0005548790.localdomain sudo[230399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:46:06 np0005548790.localdomain sudo[230399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:46:06 np0005548790.localdomain sudo[230399]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:07 np0005548790.localdomain python3.9[230413]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:46:07 np0005548790.localdomain network[230441]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:46:07 np0005548790.localdomain network[230442]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:46:07 np0005548790.localdomain network[230443]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:46:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:07.605 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:07.632 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:46:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53618 DF PROTO=TCP SPT=38246 DPT=9100 SEQ=4027383519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF031F0000000001030307) 
Dec 06 09:46:13 np0005548790.localdomain sudo[230676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixcgmtzgqyvkqhdbrovnxzvladvueuqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014372.7628925-127-249923414913762/AnsiballZ_systemd_service.py
Dec 06 09:46:13 np0005548790.localdomain sudo[230676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37657 DF PROTO=TCP SPT=36710 DPT=9100 SEQ=1863305399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF0E9F0000000001030307) 
Dec 06 09:46:13 np0005548790.localdomain python3.9[230678]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:46:13 np0005548790.localdomain sudo[230676]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:14 np0005548790.localdomain sudo[230787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xncqebwnsafctygxzqstmeftoqpkhogb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014373.965767-158-268679719508020/AnsiballZ_file.py
Dec 06 09:46:14 np0005548790.localdomain sudo[230787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:14 np0005548790.localdomain python3.9[230789]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:14 np0005548790.localdomain sudo[230787]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:14 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Dec 06 09:46:14 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 09:46:14 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:46:14 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:46:14 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:46:15 np0005548790.localdomain sudo[230898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbqzuwkrrhczykyxwysklrgzcnddrbhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014374.817334-182-270087679814114/AnsiballZ_file.py
Dec 06 09:46:15 np0005548790.localdomain sudo[230898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:15 np0005548790.localdomain python3.9[230900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:15 np0005548790.localdomain sudo[230898]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:16 np0005548790.localdomain sudo[231008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmmzrczghfntlgjrjwemcjrsdcagzofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014375.6990554-209-31608391310690/AnsiballZ_command.py
Dec 06 09:46:16 np0005548790.localdomain sudo[231008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:16 np0005548790.localdomain python3.9[231010]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:46:16 np0005548790.localdomain sudo[231008]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:17 np0005548790.localdomain python3.9[231120]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:46:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:46:17 np0005548790.localdomain podman[231138]: 2025-12-06 09:46:17.578978148 +0000 UTC m=+0.090480470 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:46:17 np0005548790.localdomain podman[231138]: 2025-12-06 09:46:17.616034359 +0000 UTC m=+0.127536661 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 09:46:17 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:46:17 np0005548790.localdomain sudo[231247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nlnvkjyunhoztjybklnucomukqhqvgxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014377.5525837-263-130598376600843/AnsiballZ_systemd_service.py
Dec 06 09:46:17 np0005548790.localdomain sudo[231247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:18 np0005548790.localdomain python3.9[231249]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:46:18 np0005548790.localdomain systemd-rc-local-generator[231272]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:46:18 np0005548790.localdomain systemd-sysv-generator[231275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53284 DF PROTO=TCP SPT=33002 DPT=9102 SEQ=4106126901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF22800000000001030307) 
Dec 06 09:46:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10695 DF PROTO=TCP SPT=47450 DPT=9882 SEQ=3926462727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF22880000000001030307) 
Dec 06 09:46:18 np0005548790.localdomain sudo[231247]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:19 np0005548790.localdomain sudo[231393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-almdpyfiqfvdknuivfsfukvpmssdozsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014379.288124-287-119792568178393/AnsiballZ_command.py
Dec 06 09:46:19 np0005548790.localdomain sudo[231393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:19 np0005548790.localdomain python3.9[231395]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:46:19 np0005548790.localdomain sudo[231393]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:20 np0005548790.localdomain sudo[231504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjswjpwngefhwrdkyhtbsoqsvmolcako ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014380.1330597-314-253893492872352/AnsiballZ_file.py
Dec 06 09:46:20 np0005548790.localdomain sudo[231504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:20 np0005548790.localdomain python3.9[231506]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:20 np0005548790.localdomain sudo[231504]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53286 DF PROTO=TCP SPT=33002 DPT=9102 SEQ=4106126901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF2E9F0000000001030307) 
Dec 06 09:46:22 np0005548790.localdomain python3.9[231614]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:46:22 np0005548790.localdomain python3.9[231724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:23 np0005548790.localdomain python3.9[231810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014382.3171477-362-230809920566635/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=4f86617083a6f44e01f41ec95006f5208579ba75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:23 np0005548790.localdomain sudo[231918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wywqwwmfkegevmriikwqhyjwswmzvgnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014383.6188054-407-181412533166795/AnsiballZ_group.py
Dec 06 09:46:23 np0005548790.localdomain sudo[231918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:24 np0005548790.localdomain python3.9[231920]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 06 09:46:24 np0005548790.localdomain sudo[231918]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29246 DF PROTO=TCP SPT=52920 DPT=9105 SEQ=2278114415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF3ADF0000000001030307) 
Dec 06 09:46:25 np0005548790.localdomain sudo[232028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odtchewifoeqlbifpuxsfzjactpustnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014384.998534-440-156085890009404/AnsiballZ_getent.py
Dec 06 09:46:25 np0005548790.localdomain sudo[232028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:25 np0005548790.localdomain python3.9[232030]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 06 09:46:25 np0005548790.localdomain sudo[232028]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:26 np0005548790.localdomain sudo[232139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzigqlkjrosvuotcisynanuynynxbqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014385.8361897-463-9928219238201/AnsiballZ_group.py
Dec 06 09:46:26 np0005548790.localdomain sudo[232139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:26 np0005548790.localdomain python3.9[232141]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 06 09:46:26 np0005548790.localdomain groupadd[232142]: group added to /etc/group: name=ceilometer, GID=42405
Dec 06 09:46:26 np0005548790.localdomain groupadd[232142]: group added to /etc/gshadow: name=ceilometer
Dec 06 09:46:26 np0005548790.localdomain groupadd[232142]: new group: name=ceilometer, GID=42405
Dec 06 09:46:26 np0005548790.localdomain sudo[232139]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:27 np0005548790.localdomain sudo[232255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvftsyfiotsjdznqzpxdxlgqzfhswfla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014386.6799626-487-271587519107821/AnsiballZ_user.py
Dec 06 09:46:27 np0005548790.localdomain sudo[232255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:27 np0005548790.localdomain python3.9[232257]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005548790.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 06 09:46:27 np0005548790.localdomain useradd[232259]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 06 09:46:27 np0005548790.localdomain useradd[232259]: add 'ceilometer' to group 'libvirt'
Dec 06 09:46:27 np0005548790.localdomain useradd[232259]: add 'ceilometer' to shadow group 'libvirt'
Dec 06 09:46:27 np0005548790.localdomain sudo[232255]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61477 DF PROTO=TCP SPT=59788 DPT=9105 SEQ=1676669871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF47200000000001030307) 
Dec 06 09:46:28 np0005548790.localdomain python3.9[232373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:29 np0005548790.localdomain python3.9[232459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765014388.450869-565-77906161714915/.source.conf _original_basename=ceilometer.conf follow=False checksum=e90760659247c177dccfbe1ef7de974794985ce9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:46:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:46:29 np0005548790.localdomain podman[232460]: 2025-12-06 09:46:29.578934841 +0000 UTC m=+0.087541601 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 09:46:29 np0005548790.localdomain podman[232460]: 2025-12-06 09:46:29.615124489 +0000 UTC m=+0.123731269 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 09:46:29 np0005548790.localdomain podman[232461]: 2025-12-06 09:46:29.62601025 +0000 UTC m=+0.134086776 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 09:46:29 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:46:29 np0005548790.localdomain podman[232461]: 2025-12-06 09:46:29.666175803 +0000 UTC m=+0.174252319 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 09:46:29 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:46:30 np0005548790.localdomain python3.9[232610]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29248 DF PROTO=TCP SPT=52920 DPT=9105 SEQ=2278114415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF529F0000000001030307) 
Dec 06 09:46:31 np0005548790.localdomain python3.9[232696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765014389.5889432-565-54711632992602/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:31 np0005548790.localdomain python3.9[232804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:32 np0005548790.localdomain python3.9[232890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765014391.4022682-565-230929597809344/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:33 np0005548790.localdomain python3.9[232998]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:46:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53288 DF PROTO=TCP SPT=33002 DPT=9102 SEQ=4106126901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF5F1F0000000001030307) 
Dec 06 09:46:34 np0005548790.localdomain python3.9[233106]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:46:35 np0005548790.localdomain python3.9[233214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:35 np0005548790.localdomain python3.9[233300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014394.6809518-742-204402603437897/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:36 np0005548790.localdomain python3.9[233408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57243 DF PROTO=TCP SPT=54428 DPT=9101 SEQ=3502941754 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF691F0000000001030307) 
Dec 06 09:46:36 np0005548790.localdomain python3.9[233463]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:37 np0005548790.localdomain python3.9[233572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:37 np0005548790.localdomain python3.9[233658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014396.836746-742-278529157677493/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:38 np0005548790.localdomain python3.9[233766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:38 np0005548790.localdomain python3.9[233852]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014397.9719374-742-208972265288257/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:39 np0005548790.localdomain python3.9[233960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36945 DF PROTO=TCP SPT=40606 DPT=9100 SEQ=2510765874 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF77200000000001030307) 
Dec 06 09:46:40 np0005548790.localdomain python3.9[234046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014399.0549312-742-238805022235873/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:40 np0005548790.localdomain python3.9[234154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:41 np0005548790.localdomain python3.9[234240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014400.2076786-742-190120770129184/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:41 np0005548790.localdomain python3.9[234348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:42 np0005548790.localdomain python3.9[234434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014401.4501045-742-173499401871727/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:42 np0005548790.localdomain python3.9[234542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30275 DF PROTO=TCP SPT=45854 DPT=9100 SEQ=3552094298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF83DF0000000001030307) 
Dec 06 09:46:43 np0005548790.localdomain python3.9[234628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014402.535357-742-264357214389339/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:44 np0005548790.localdomain python3.9[234736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:45 np0005548790.localdomain python3.9[234822]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014404.434824-742-54559244358049/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:46 np0005548790.localdomain python3.9[234930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:47 np0005548790.localdomain python3.9[235016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014405.829198-742-25611279562759/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:48 np0005548790.localdomain python3.9[235124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:46:48.353 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:46:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:46:48.354 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:46:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:46:48.354 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:46:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9201 DF PROTO=TCP SPT=48800 DPT=9102 SEQ=1431178276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF97B10000000001030307) 
Dec 06 09:46:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2747 DF PROTO=TCP SPT=39648 DPT=9882 SEQ=1121500136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FF97B70000000001030307) 
Dec 06 09:46:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:46:48 np0005548790.localdomain podman[235211]: 2025-12-06 09:46:48.576610139 +0000 UTC m=+0.086432782 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Dec 06 09:46:48 np0005548790.localdomain podman[235211]: 2025-12-06 09:46:48.589060711 +0000 UTC m=+0.098883374 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Dec 06 09:46:48 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:46:48 np0005548790.localdomain python3.9[235210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014407.6575098-742-169948091426820/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:46:50 np0005548790.localdomain sudo[235337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilsmzgzohvxespwlbwasjaawdittzlfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014409.757954-1208-62829256404146/AnsiballZ_file.py
Dec 06 09:46:50 np0005548790.localdomain sudo[235337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:50 np0005548790.localdomain python3.9[235339]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:50 np0005548790.localdomain sudo[235337]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:50 np0005548790.localdomain sudo[235447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhqnqngtyytcukckijbcxhucxoumvrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014410.4833183-1232-249270133871795/AnsiballZ_systemd_service.py
Dec 06 09:46:50 np0005548790.localdomain sudo[235447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:51 np0005548790.localdomain python3.9[235449]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:46:51 np0005548790.localdomain systemd-sysv-generator[235480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:46:51 np0005548790.localdomain systemd-rc-local-generator[235476]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:46:51 np0005548790.localdomain systemd[1]: Listening on Podman API Socket.
Dec 06 09:46:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9203 DF PROTO=TCP SPT=48800 DPT=9102 SEQ=1431178276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFA3A00000000001030307) 
Dec 06 09:46:51 np0005548790.localdomain sudo[235447]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:52 np0005548790.localdomain sudo[235597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzlwbszsoeykpdynekkcxoverwmrqtcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014411.9090664-1259-163902228444240/AnsiballZ_stat.py
Dec 06 09:46:52 np0005548790.localdomain sudo[235597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:52 np0005548790.localdomain python3.9[235599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:52 np0005548790.localdomain sudo[235597]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:52 np0005548790.localdomain sudo[235685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbsxzhqekhyjhhtjfwwptheugeengbfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014411.9090664-1259-163902228444240/AnsiballZ_copy.py
Dec 06 09:46:52 np0005548790.localdomain sudo[235685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:53 np0005548790.localdomain python3.9[235687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014411.9090664-1259-163902228444240/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:53 np0005548790.localdomain sudo[235685]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:53 np0005548790.localdomain sudo[235740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-redrhskucyfyonrorspzoobxluahxqge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014411.9090664-1259-163902228444240/AnsiballZ_stat.py
Dec 06 09:46:53 np0005548790.localdomain sudo[235740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:53 np0005548790.localdomain python3.9[235742]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:46:53 np0005548790.localdomain sudo[235740]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:53 np0005548790.localdomain sudo[235828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mktvymwplzxzhravvucvfgnldosortfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014411.9090664-1259-163902228444240/AnsiballZ_copy.py
Dec 06 09:46:53 np0005548790.localdomain sudo[235828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:54 np0005548790.localdomain python3.9[235830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014411.9090664-1259-163902228444240/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:46:54 np0005548790.localdomain sudo[235828]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52007 DF PROTO=TCP SPT=42378 DPT=9105 SEQ=1830276597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFB0200000000001030307) 
Dec 06 09:46:55 np0005548790.localdomain sudo[235938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvpbrxgpjzuyvfghhvlczxqqodonmioe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014414.6193223-1342-25164595684828/AnsiballZ_container_config_data.py
Dec 06 09:46:55 np0005548790.localdomain sudo[235938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:55 np0005548790.localdomain python3.9[235940]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 06 09:46:55 np0005548790.localdomain sudo[235938]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:55 np0005548790.localdomain sudo[236048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvkmnbixhowfvoxztvfgkaehrnnuecni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014415.5637496-1370-226887914615555/AnsiballZ_container_config_hash.py
Dec 06 09:46:55 np0005548790.localdomain sudo[236048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:56 np0005548790.localdomain python3.9[236050]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:46:56 np0005548790.localdomain sudo[236048]: pam_unix(sudo:session): session closed for user root
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.888 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.889 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.890 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.890 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.907 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.908 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.908 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.909 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.909 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.909 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.910 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.910 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.910 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.930 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.931 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.931 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.931 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:46:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:56.932 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.417 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:46:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2788 DF PROTO=TCP SPT=32930 DPT=9105 SEQ=1230918687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFBB200000000001030307) 
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.643 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.645 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13614MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.646 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.647 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:46:57 np0005548790.localdomain sudo[236180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qltvbtqhswyomhieyuaozzlzigydlbbw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014416.6532242-1399-120053064899204/AnsiballZ_edpm_container_manage.py
Dec 06 09:46:57 np0005548790.localdomain sudo[236180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.743 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.744 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:46:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:57.760 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:46:58 np0005548790.localdomain python3[236182]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:46:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:58.212 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:46:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:58.220 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:46:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:58.241 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:46:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:58.244 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:46:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:46:58.244 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:46:58 np0005548790.localdomain python3[236182]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",
                                                                    "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:21:53.58682213Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 505175293,
                                                                    "VirtualSize": 505175293,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",
                                                                              "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:14:56.244673147Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:14:56.960273159Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:37.588899909Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:41.197123864Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:19.680010224Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:53.584924649Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:56.278821402Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 06 09:46:58 np0005548790.localdomain podman[236257]: 2025-12-06 09:46:58.385558373 +0000 UTC m=+0.089901344 container remove 610f8c140e8a59a72e6e46f1e837730304784102fa73e5180c09ffbe968540be (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '94eddc2d1a780b6dc03d015a7bd0e411'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc.)
Dec 06 09:46:58 np0005548790.localdomain python3[236182]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute
Dec 06 09:46:58 np0005548790.localdomain podman[236272]: 
Dec 06 09:46:58 np0005548790.localdomain podman[236272]: 2025-12-06 09:46:58.488993748 +0000 UTC m=+0.085655160 container create 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:46:58 np0005548790.localdomain podman[236272]: 2025-12-06 09:46:58.448433284 +0000 UTC m=+0.045094726 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 06 09:46:58 np0005548790.localdomain python3[236182]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start
Dec 06 09:46:58 np0005548790.localdomain sudo[236180]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:00 np0005548790.localdomain sudo[236419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzpumuvkxfbbitiercjtclvcumaxdprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014419.0480103-1424-142520689605450/AnsiballZ_stat.py
Dec 06 09:47:00 np0005548790.localdomain sudo[236419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:47:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:47:00 np0005548790.localdomain systemd[1]: tmp-crun.fjftob.mount: Deactivated successfully.
Dec 06 09:47:00 np0005548790.localdomain podman[236421]: 2025-12-06 09:47:00.544939868 +0000 UTC m=+0.091467867 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 09:47:00 np0005548790.localdomain podman[236421]: 2025-12-06 09:47:00.578270769 +0000 UTC m=+0.124798748 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:47:00 np0005548790.localdomain systemd[1]: tmp-crun.BAR146.mount: Deactivated successfully.
Dec 06 09:47:00 np0005548790.localdomain podman[236423]: 2025-12-06 09:47:00.594898863 +0000 UTC m=+0.142198072 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:47:00 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:47:00 np0005548790.localdomain podman[236423]: 2025-12-06 09:47:00.634124512 +0000 UTC m=+0.181423711 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 09:47:00 np0005548790.localdomain python3.9[236422]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:47:00 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:47:00 np0005548790.localdomain sudo[236419]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52009 DF PROTO=TCP SPT=42378 DPT=9105 SEQ=1830276597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFC7DF0000000001030307) 
Dec 06 09:47:01 np0005548790.localdomain sudo[236569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pupcbfbtgjesfwmlcxxbkljlgkdfbqdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014421.053854-1450-17155909976802/AnsiballZ_file.py
Dec 06 09:47:01 np0005548790.localdomain sudo[236569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:01 np0005548790.localdomain python3.9[236571]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:01 np0005548790.localdomain sudo[236569]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:02 np0005548790.localdomain sudo[236678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpkojqftvucnekvsgftltfubzjlvayyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014421.6289656-1450-54057644307928/AnsiballZ_copy.py
Dec 06 09:47:02 np0005548790.localdomain sudo[236678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:02 np0005548790.localdomain python3.9[236680]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014421.6289656-1450-54057644307928/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:02 np0005548790.localdomain sudo[236678]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:02 np0005548790.localdomain sudo[236733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kniffialorrangwgpefmjugrxabvutof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014421.6289656-1450-54057644307928/AnsiballZ_systemd.py
Dec 06 09:47:02 np0005548790.localdomain sudo[236733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:02 np0005548790.localdomain python3.9[236735]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:47:02 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:47:03 np0005548790.localdomain systemd-sysv-generator[236763]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:03 np0005548790.localdomain systemd-rc-local-generator[236758]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:03 np0005548790.localdomain sudo[236733]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2751 DF PROTO=TCP SPT=39648 DPT=9882 SEQ=1121500136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFD31F0000000001030307) 
Dec 06 09:47:03 np0005548790.localdomain sudo[236824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okvvzkasvaacxqvnqxgmotctoowulkpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014421.6289656-1450-54057644307928/AnsiballZ_systemd.py
Dec 06 09:47:03 np0005548790.localdomain sudo[236824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:03 np0005548790.localdomain python3.9[236826]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:47:04 np0005548790.localdomain systemd-rc-local-generator[236850]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:04 np0005548790.localdomain systemd-sysv-generator[236854]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:47:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc8c293e1758c0ea097a85c6f613899bf193b428a1129e52ba5e7aac608f9cd/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:04 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc8c293e1758c0ea097a85c6f613899bf193b428a1129e52ba5e7aac608f9cd/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:47:04 np0005548790.localdomain podman[236867]: 2025-12-06 09:47:04.562915214 +0000 UTC m=+0.136953018 container init 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + sudo -E kolla_set_configs
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: sudo: unable to send audit message: Operation not permitted
Dec 06 09:47:04 np0005548790.localdomain sudo[236887]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 09:47:04 np0005548790.localdomain sudo[236887]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:47:04 np0005548790.localdomain sudo[236887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:47:04 np0005548790.localdomain podman[236867]: 2025-12-06 09:47:04.595381701 +0000 UTC m=+0.169419475 container start 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:47:04 np0005548790.localdomain podman[236867]: ceilometer_agent_compute
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 06 09:47:04 np0005548790.localdomain sudo[236824]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Validating config file
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Copying service configuration files
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: INFO:__main__:Writing out command to execute
Dec 06 09:47:04 np0005548790.localdomain sudo[236887]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: ++ cat /run_command
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + ARGS=
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + sudo kolla_copy_cacerts
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: sudo: unable to send audit message: Operation not permitted
Dec 06 09:47:04 np0005548790.localdomain sudo[236900]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 09:47:04 np0005548790.localdomain sudo[236900]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:47:04 np0005548790.localdomain sudo[236900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 06 09:47:04 np0005548790.localdomain sudo[236900]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + [[ ! -n '' ]]
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + . kolla_extend_start
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + umask 0022
Dec 06 09:47:04 np0005548790.localdomain ceilometer_agent_compute[236881]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 06 09:47:04 np0005548790.localdomain podman[236889]: 2025-12-06 09:47:04.684808475 +0000 UTC m=+0.083704161 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true)
Dec 06 09:47:04 np0005548790.localdomain podman[236889]: 2025-12-06 09:47:04.717180448 +0000 UTC m=+0.116076094 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 06 09:47:04 np0005548790.localdomain podman[236889]: unhealthy
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:47:04 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:47:05 np0005548790.localdomain sudo[237019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzjtfftqvyvzqhzapdauomziicyuljaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014424.9919322-1523-116590185329934/AnsiballZ_systemd.py
Dec 06 09:47:05 np0005548790.localdomain sudo[237019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.398 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.399 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.400 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.401 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.402 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.403 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.404 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.405 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.406 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.407 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.408 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.409 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.410 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.411 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.412 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.412 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.412 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.412 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.412 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.430 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.431 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.432 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.526 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.586 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.586 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.586 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.586 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.586 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.586 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.587 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.588 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.589 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.590 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.591 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.593 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.594 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.595 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.596 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain python3.9[237021]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.598 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.599 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.600 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.602 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.604 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.604 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.604 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.607 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.616 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:05 np0005548790.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.694 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.795 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.795 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.795 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 06 09:47:05 np0005548790.localdomain ceilometer_agent_compute[236881]: 2025-12-06 09:47:05.803 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec 06 09:47:05 np0005548790.localdomain virtqemud[228868]: End of file while reading data: Input/output error
Dec 06 09:47:05 np0005548790.localdomain virtqemud[228868]: End of file while reading data: Input/output error
Dec 06 09:47:05 np0005548790.localdomain systemd[1]: libpod-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.scope: Deactivated successfully.
Dec 06 09:47:05 np0005548790.localdomain systemd[1]: libpod-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.scope: Consumed 1.178s CPU time.
Dec 06 09:47:05 np0005548790.localdomain podman[237031]: 2025-12-06 09:47:05.929291479 +0000 UTC m=+0.269345862 container died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:47:05 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.timer: Deactivated successfully.
Dec 06 09:47:05 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:47:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8-userdata-shm.mount: Deactivated successfully.
Dec 06 09:47:06 np0005548790.localdomain podman[237031]: 2025-12-06 09:47:06.007883091 +0000 UTC m=+0.347937473 container cleanup 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Dec 06 09:47:06 np0005548790.localdomain podman[237031]: ceilometer_agent_compute
Dec 06 09:47:06 np0005548790.localdomain podman[237058]: 2025-12-06 09:47:06.112021062 +0000 UTC m=+0.071622584 container cleanup 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 09:47:06 np0005548790.localdomain podman[237058]: ceilometer_agent_compute
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:47:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc8c293e1758c0ea097a85c6f613899bf193b428a1129e52ba5e7aac608f9cd/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc8c293e1758c0ea097a85c6f613899bf193b428a1129e52ba5e7aac608f9cd/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:47:06 np0005548790.localdomain podman[237070]: 2025-12-06 09:47:06.265001942 +0000 UTC m=+0.123109154 container init 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + sudo -E kolla_set_configs
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: sudo: unable to send audit message: Operation not permitted
Dec 06 09:47:06 np0005548790.localdomain sudo[237089]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 06 09:47:06 np0005548790.localdomain sudo[237089]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:47:06 np0005548790.localdomain sudo[237089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:47:06 np0005548790.localdomain podman[237070]: 2025-12-06 09:47:06.296214885 +0000 UTC m=+0.154322117 container start 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:47:06 np0005548790.localdomain podman[237070]: ceilometer_agent_compute
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 06 09:47:06 np0005548790.localdomain sudo[237019]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Validating config file
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Copying service configuration files
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: INFO:__main__:Writing out command to execute
Dec 06 09:47:06 np0005548790.localdomain sudo[237089]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: ++ cat /run_command
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + ARGS=
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + sudo kolla_copy_cacerts
Dec 06 09:47:06 np0005548790.localdomain sudo[237107]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: sudo: unable to send audit message: Operation not permitted
Dec 06 09:47:06 np0005548790.localdomain sudo[237107]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 06 09:47:06 np0005548790.localdomain sudo[237107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 06 09:47:06 np0005548790.localdomain podman[237091]: 2025-12-06 09:47:06.388756873 +0000 UTC m=+0.088423468 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:47:06 np0005548790.localdomain sudo[237107]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + [[ ! -n '' ]]
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + . kolla_extend_start
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + umask 0022
Dec 06 09:47:06 np0005548790.localdomain ceilometer_agent_compute[237083]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 06 09:47:06 np0005548790.localdomain podman[237091]: 2025-12-06 09:47:06.416929753 +0000 UTC m=+0.116596358 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 09:47:06 np0005548790.localdomain podman[237091]: unhealthy
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:47:06 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:47:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51299 DF PROTO=TCP SPT=59236 DPT=9101 SEQ=1839489273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFDF200000000001030307) 
Dec 06 09:47:07 np0005548790.localdomain sudo[237131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:47:07 np0005548790.localdomain sudo[237131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:47:07 np0005548790.localdomain sudo[237131]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.112 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.113 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.114 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain sudo[237149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.115 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.116 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.117 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain sudo[237149]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.118 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.119 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.120 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.121 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.122 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.123 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.124 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.125 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.126 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.127 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.127 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.127 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.127 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.145 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.146 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.148 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.164 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.289 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.289 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.289 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.289 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.290 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.291 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.292 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.293 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.294 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.295 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.296 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.297 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.298 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.299 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.300 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.301 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.302 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.303 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.304 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.305 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.306 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.307 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.307 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.307 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.310 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.318 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:47:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:47:07 np0005548790.localdomain sudo[237271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igxbnofpubatkhtvshwpfdinzrtakifz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014427.1477358-1547-121969306569599/AnsiballZ_stat.py
Dec 06 09:47:07 np0005548790.localdomain sudo[237271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:07 np0005548790.localdomain python3.9[237279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:07 np0005548790.localdomain sudo[237271]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:07 np0005548790.localdomain sudo[237149]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:07 np0005548790.localdomain sudo[237382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fglkxyjkagjbvqltfsugvtqmvuyptxwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014427.1477358-1547-121969306569599/AnsiballZ_copy.py
Dec 06 09:47:07 np0005548790.localdomain sudo[237382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:08 np0005548790.localdomain python3.9[237384]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014427.1477358-1547-121969306569599/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:47:08 np0005548790.localdomain sudo[237382]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:08 np0005548790.localdomain sudo[237402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:47:08 np0005548790.localdomain sudo[237402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:47:08 np0005548790.localdomain sudo[237402]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:08 np0005548790.localdomain sudo[237510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mruispfyycrqgizcgqatwskxltnwryqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014428.6039834-1598-255407837428398/AnsiballZ_container_config_data.py
Dec 06 09:47:08 np0005548790.localdomain sudo[237510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:09 np0005548790.localdomain python3.9[237512]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 06 09:47:09 np0005548790.localdomain sudo[237510]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37660 DF PROTO=TCP SPT=36710 DPT=9100 SEQ=1863305399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFED1F0000000001030307) 
Dec 06 09:47:10 np0005548790.localdomain sudo[237620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llrykmcifwxdrvwkumkusawtiyekpouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014430.65769-1625-157630831681503/AnsiballZ_container_config_hash.py
Dec 06 09:47:10 np0005548790.localdomain sudo[237620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:11 np0005548790.localdomain python3.9[237622]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:47:11 np0005548790.localdomain sudo[237620]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:12 np0005548790.localdomain sudo[237730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vukgczpyxitvrqyhekgsgxofphdogisk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014431.9556568-1654-142146924383801/AnsiballZ_edpm_container_manage.py
Dec 06 09:47:12 np0005548790.localdomain sudo[237730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:12 np0005548790.localdomain python3[237732]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:47:12 np0005548790.localdomain podman[237772]: 
Dec 06 09:47:12 np0005548790.localdomain podman[237772]: 2025-12-06 09:47:12.791508487 +0000 UTC m=+0.083057994 container create 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Dec 06 09:47:12 np0005548790.localdomain podman[237772]: 2025-12-06 09:47:12.751951049 +0000 UTC m=+0.043500636 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 06 09:47:12 np0005548790.localdomain python3[237732]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec 06 09:47:12 np0005548790.localdomain sudo[237730]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50706 DF PROTO=TCP SPT=33670 DPT=9100 SEQ=3558002238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A17FFF8DF0000000001030307) 
Dec 06 09:47:13 np0005548790.localdomain sudo[237914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-firjpnrpxmyombblvxioqnuqhrjmepcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014433.211987-1678-275053369151224/AnsiballZ_stat.py
Dec 06 09:47:13 np0005548790.localdomain sudo[237914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:13 np0005548790.localdomain python3.9[237916]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:47:13 np0005548790.localdomain sudo[237914]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:14 np0005548790.localdomain sudo[238026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iewcjzvjlybfrzgkfgesfaghymeigvkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014434.237782-1705-196021409352792/AnsiballZ_file.py
Dec 06 09:47:14 np0005548790.localdomain sudo[238026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:14 np0005548790.localdomain python3.9[238028]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:14 np0005548790.localdomain sudo[238026]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:15 np0005548790.localdomain sudo[238135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xoocsefqqrqhygopzwymnpxbzrecnsoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014434.7911158-1705-247464850935617/AnsiballZ_copy.py
Dec 06 09:47:15 np0005548790.localdomain sudo[238135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:15 np0005548790.localdomain python3.9[238137]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014434.7911158-1705-247464850935617/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:15 np0005548790.localdomain sudo[238135]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:15 np0005548790.localdomain sudo[238190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhyupyxmzlgnfhbqsdqyveakcfyixtxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014434.7911158-1705-247464850935617/AnsiballZ_systemd.py
Dec 06 09:47:15 np0005548790.localdomain sudo[238190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:16 np0005548790.localdomain python3.9[238192]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:47:16 np0005548790.localdomain systemd-sysv-generator[238220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:16 np0005548790.localdomain systemd-rc-local-generator[238215]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:16 np0005548790.localdomain sudo[238190]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:16 np0005548790.localdomain sudo[238281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ciguilmsithsnzcimpkfflsxexeaeqjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014434.7911158-1705-247464850935617/AnsiballZ_systemd.py
Dec 06 09:47:16 np0005548790.localdomain sudo[238281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:16 np0005548790.localdomain python3.9[238283]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:47:17 np0005548790.localdomain systemd-rc-local-generator[238306]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:17 np0005548790.localdomain systemd-sysv-generator[238312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: Starting node_exporter container...
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:47:17 np0005548790.localdomain podman[238323]: 2025-12-06 09:47:17.463835677 +0000 UTC m=+0.109634761 container init 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.479Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.479Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.479Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.480Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.480Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.480Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.480Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=arp
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=bcache
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=bonding
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=cpu
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=edac
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=filefd
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=netclass
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=netdev
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=netstat
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=nfs
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=nvme
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=softnet
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=systemd
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=xfs
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.481Z caller=node_exporter.go:117 level=info collector=zfs
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.482Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 06 09:47:17 np0005548790.localdomain node_exporter[238338]: ts=2025-12-06T09:47:17.482Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:47:17 np0005548790.localdomain podman[238323]: 2025-12-06 09:47:17.504104394 +0000 UTC m=+0.149903488 container start 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:47:17 np0005548790.localdomain podman[238323]: node_exporter
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: Started node_exporter container.
Dec 06 09:47:17 np0005548790.localdomain sudo[238281]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:17 np0005548790.localdomain podman[238347]: 2025-12-06 09:47:17.590613399 +0000 UTC m=+0.082600821 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:47:17 np0005548790.localdomain podman[238347]: 2025-12-06 09:47:17.603117446 +0000 UTC m=+0.095104848 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:47:17 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:47:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49923 DF PROTO=TCP SPT=54808 DPT=9102 SEQ=2376768085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18000CE10000000001030307) 
Dec 06 09:47:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7764 DF PROTO=TCP SPT=35496 DPT=9882 SEQ=1525894189 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18000CE70000000001030307) 
Dec 06 09:47:19 np0005548790.localdomain sudo[238475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lytlolnqzztqmmlgmbgwuarhchobxlkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014439.0291731-1777-126707645584064/AnsiballZ_systemd.py
Dec 06 09:47:19 np0005548790.localdomain sudo[238475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:47:19 np0005548790.localdomain podman[238477]: 2025-12-06 09:47:19.464433652 +0000 UTC m=+0.130427022 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 06 09:47:19 np0005548790.localdomain podman[238477]: 2025-12-06 09:47:19.503207109 +0000 UTC m=+0.169200439 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:47:19 np0005548790.localdomain python3.9[238478]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: Stopping node_exporter container...
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: tmp-crun.LWasic.mount: Deactivated successfully.
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: libpod-028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.scope: Deactivated successfully.
Dec 06 09:47:19 np0005548790.localdomain podman[238500]: 2025-12-06 09:47:19.736935628 +0000 UTC m=+0.066199758 container died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.timer: Deactivated successfully.
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:47:19 np0005548790.localdomain podman[238500]: 2025-12-06 09:47:19.779093446 +0000 UTC m=+0.108357566 container cleanup 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:47:19 np0005548790.localdomain podman[238500]: node_exporter
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 06 09:47:19 np0005548790.localdomain podman[238526]: 2025-12-06 09:47:19.876057874 +0000 UTC m=+0.068679465 container cleanup 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:47:19 np0005548790.localdomain podman[238526]: node_exporter
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: Stopped node_exporter container.
Dec 06 09:47:19 np0005548790.localdomain systemd[1]: Starting node_exporter container...
Dec 06 09:47:20 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:47:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:47:20 np0005548790.localdomain podman[238539]: 2025-12-06 09:47:20.037213604 +0000 UTC m=+0.132315402 container init 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.051Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.051Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.051Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.052Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.052Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.052Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.052Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.053Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=arp
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=bcache
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=bonding
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=cpu
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=edac
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=filefd
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=netclass
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=netdev
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=netstat
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=nfs
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=nvme
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.054Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=softnet
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=systemd
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=xfs
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=node_exporter.go:117 level=info collector=zfs
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 06 09:47:20 np0005548790.localdomain node_exporter[238554]: ts=2025-12-06T09:47:20.055Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 06 09:47:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:47:20 np0005548790.localdomain podman[238539]: 2025-12-06 09:47:20.074753078 +0000 UTC m=+0.169854866 container start 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:47:20 np0005548790.localdomain podman[238539]: node_exporter
Dec 06 09:47:20 np0005548790.localdomain systemd[1]: Started node_exporter container.
Dec 06 09:47:20 np0005548790.localdomain sudo[238475]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:20 np0005548790.localdomain podman[238563]: 2025-12-06 09:47:20.164419959 +0000 UTC m=+0.089610710 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:47:20 np0005548790.localdomain podman[238563]: 2025-12-06 09:47:20.178361315 +0000 UTC m=+0.103552066 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:47:20 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:47:20 np0005548790.localdomain sudo[238694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tijhyypcrwsnzebxykdgjbqtrxoefixg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014440.3812542-1802-194466985490345/AnsiballZ_stat.py
Dec 06 09:47:20 np0005548790.localdomain sudo[238694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:20 np0005548790.localdomain python3.9[238696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:20 np0005548790.localdomain sudo[238694]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49925 DF PROTO=TCP SPT=54808 DPT=9102 SEQ=2376768085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180018E00000000001030307) 
Dec 06 09:47:22 np0005548790.localdomain sudo[238782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lyyshsgbpaxdriwwhwnwshvyyuzzekiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014440.3812542-1802-194466985490345/AnsiballZ_copy.py
Dec 06 09:47:22 np0005548790.localdomain sudo[238782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:22 np0005548790.localdomain python3.9[238784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014440.3812542-1802-194466985490345/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:47:22 np0005548790.localdomain sudo[238782]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:23 np0005548790.localdomain sudo[238892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqhbydgjicyejjlcbpbqafuchklxrasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014443.0426123-1853-154256739951219/AnsiballZ_container_config_data.py
Dec 06 09:47:23 np0005548790.localdomain sudo[238892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:23 np0005548790.localdomain python3.9[238894]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 06 09:47:23 np0005548790.localdomain sudo[238892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:24 np0005548790.localdomain sudo[239002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esxstwtvnpoeieglwkeahdhlgwxvlvve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014444.3128486-1880-176557099726276/AnsiballZ_container_config_hash.py
Dec 06 09:47:24 np0005548790.localdomain sudo[239002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24864 DF PROTO=TCP SPT=43308 DPT=9105 SEQ=3194383658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800255F0000000001030307) 
Dec 06 09:47:24 np0005548790.localdomain python3.9[239004]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:47:24 np0005548790.localdomain sudo[239002]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:25 np0005548790.localdomain sudo[239112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxwkapumxfrsiweehyuvmrkvhhgcwmsb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014445.297843-1910-118397295135256/AnsiballZ_edpm_container_manage.py
Dec 06 09:47:25 np0005548790.localdomain sudo[239112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:25 np0005548790.localdomain python3[239114]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:47:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29251 DF PROTO=TCP SPT=52920 DPT=9105 SEQ=2278114415 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800311F0000000001030307) 
Dec 06 09:47:27 np0005548790.localdomain podman[239129]: 2025-12-06 09:47:25.914590423 +0000 UTC m=+0.045819658 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 06 09:47:27 np0005548790.localdomain podman[239199]: 
Dec 06 09:47:27 np0005548790.localdomain podman[239199]: 2025-12-06 09:47:27.90817469 +0000 UTC m=+0.081212774 container create 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:47:27 np0005548790.localdomain podman[239199]: 2025-12-06 09:47:27.870519053 +0000 UTC m=+0.043557167 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 06 09:47:27 np0005548790.localdomain python3[239114]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 06 09:47:28 np0005548790.localdomain sudo[239112]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:28 np0005548790.localdomain sudo[239345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcbmuexzdqglhkrodysrdgmdvfoxtaea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014448.3797169-1933-65679660044496/AnsiballZ_stat.py
Dec 06 09:47:28 np0005548790.localdomain sudo[239345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:28 np0005548790.localdomain python3.9[239347]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:47:28 np0005548790.localdomain sudo[239345]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:29 np0005548790.localdomain sudo[239457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjvcfzwkotenghwaouegpkuwfjwcdyeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014449.2307663-1960-226193795403973/AnsiballZ_file.py
Dec 06 09:47:29 np0005548790.localdomain sudo[239457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:29 np0005548790.localdomain python3.9[239459]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:29 np0005548790.localdomain sudo[239457]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:30 np0005548790.localdomain sudo[239566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlqshghwdqmkhtqnsrybhgopmcegfwje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014449.7930965-1960-198809630456024/AnsiballZ_copy.py
Dec 06 09:47:30 np0005548790.localdomain sudo[239566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:30 np0005548790.localdomain python3.9[239568]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014449.7930965-1960-198809630456024/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:47:30 np0005548790.localdomain sudo[239566]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:30 np0005548790.localdomain sudo[239621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thhunkvlftaaqoduixkczpbcpgfggxri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014449.7930965-1960-198809630456024/AnsiballZ_systemd.py
Dec 06 09:47:30 np0005548790.localdomain sudo[239621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:47:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24866 DF PROTO=TCP SPT=43308 DPT=9105 SEQ=3194383658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18003D1F0000000001030307) 
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: tmp-crun.3YqZIS.mount: Deactivated successfully.
Dec 06 09:47:30 np0005548790.localdomain podman[239625]: 2025-12-06 09:47:30.762158974 +0000 UTC m=+0.086784594 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:47:30 np0005548790.localdomain podman[239625]: 2025-12-06 09:47:30.804702282 +0000 UTC m=+0.129327982 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: tmp-crun.AofNXI.mount: Deactivated successfully.
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:47:30 np0005548790.localdomain podman[239624]: 2025-12-06 09:47:30.816995564 +0000 UTC m=+0.141081889 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:47:30 np0005548790.localdomain podman[239624]: 2025-12-06 09:47:30.900267432 +0000 UTC m=+0.224353697 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:47:30 np0005548790.localdomain python3.9[239623]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:47:30 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:47:31 np0005548790.localdomain systemd-sysv-generator[239688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:31 np0005548790.localdomain systemd-rc-local-generator[239685]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:31 np0005548790.localdomain sudo[239621]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:31 np0005548790.localdomain sudo[239752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljkwimtdqcfutezmcvlnaeuyxcfhwwqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014449.7930965-1960-198809630456024/AnsiballZ_systemd.py
Dec 06 09:47:31 np0005548790.localdomain sudo[239752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:31 np0005548790.localdomain python3.9[239754]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:47:32 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:47:33 np0005548790.localdomain systemd-sysv-generator[239786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:47:33 np0005548790.localdomain systemd-rc-local-generator[239783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Starting podman_exporter container...
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:47:33 np0005548790.localdomain podman[239795]: 2025-12-06 09:47:33.415697086 +0000 UTC m=+0.141505762 container init 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:33 np0005548790.localdomain podman_exporter[239809]: ts=2025-12-06T09:47:33.436Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 06 09:47:33 np0005548790.localdomain podman_exporter[239809]: ts=2025-12-06T09:47:33.436Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 06 09:47:33 np0005548790.localdomain podman_exporter[239809]: ts=2025-12-06T09:47:33.437Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 06 09:47:33 np0005548790.localdomain podman_exporter[239809]: ts=2025-12-06T09:47:33.437Z caller=handler.go:105 level=info collector=container
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:47:33 np0005548790.localdomain podman[239795]: 2025-12-06 09:47:33.44956692 +0000 UTC m=+0.175375546 container start 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:33 np0005548790.localdomain podman[239795]: podman_exporter
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Starting Podman API Service...
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Started podman_exporter container.
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: Started Podman API Service.
Dec 06 09:47:33 np0005548790.localdomain sudo[239752]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:33Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:33Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:33Z" level=info msg="Setting parallel job count to 25"
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:33Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:33Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 06 09:47:33 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:47:33 np0005548790.localdomain podman[239819]: 2025-12-06 09:47:33.560213127 +0000 UTC m=+0.102743644 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:33 np0005548790.localdomain podman[239819]: 2025-12-06 09:47:33.568346176 +0000 UTC m=+0.110876703 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:47:33 np0005548790.localdomain podman[239819]: unhealthy
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:47:33 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:47:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49927 DF PROTO=TCP SPT=54808 DPT=9102 SEQ=2376768085 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800491F0000000001030307) 
Dec 06 09:47:34 np0005548790.localdomain sudo[239966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oiimzzyfzwtdszuwlluwzkolwpeueilv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014454.5180109-2033-68226345669397/AnsiballZ_systemd.py
Dec 06 09:47:34 np0005548790.localdomain sudo[239966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:35 np0005548790.localdomain python3.9[239968]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:47:35 np0005548790.localdomain systemd[1]: Stopping podman_exporter container...
Dec 06 09:47:35 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1"
Dec 06 09:47:35 np0005548790.localdomain systemd[1]: libpod-0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.scope: Deactivated successfully.
Dec 06 09:47:35 np0005548790.localdomain podman[239972]: 2025-12-06 09:47:35.293529778 +0000 UTC m=+0.071680575 container died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:47:35 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.timer: Deactivated successfully.
Dec 06 09:47:35 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:47:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7-userdata-shm.mount: Deactivated successfully.
Dec 06 09:47:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:47:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:47:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f94e142dc2522aea0ec82c33a754334612c73a202660eba4bba5a36df12b5f20-merged.mount: Deactivated successfully.
Dec 06 09:47:36 np0005548790.localdomain podman[239972]: 2025-12-06 09:47:36.718069604 +0000 UTC m=+1.496220341 container cleanup 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:36 np0005548790.localdomain podman[239972]: podman_exporter
Dec 06 09:47:36 np0005548790.localdomain podman[239998]: 2025-12-06 09:47:36.760734916 +0000 UTC m=+0.295479498 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 09:47:36 np0005548790.localdomain podman[239984]: 2025-12-06 09:47:36.785187646 +0000 UTC m=+1.490153208 container cleanup 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:36 np0005548790.localdomain podman[239998]: 2025-12-06 09:47:36.793176101 +0000 UTC m=+0.327920703 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:47:36 np0005548790.localdomain podman[239998]: unhealthy
Dec 06 09:47:37 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63163 DF PROTO=TCP SPT=40486 DPT=9100 SEQ=3433094193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180056600000000001030307) 
Dec 06 09:47:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:47:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:38 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:47:38 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:47:38 np0005548790.localdomain systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 06 09:47:38 np0005548790.localdomain podman[240017]: 2025-12-06 09:47:38.77015306 +0000 UTC m=+0.085600801 container cleanup 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:38 np0005548790.localdomain podman[240017]: podman_exporter
Dec 06 09:47:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:47:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:47:39 np0005548790.localdomain systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 06 09:47:39 np0005548790.localdomain systemd[1]: Stopped podman_exporter container.
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: Starting podman_exporter container...
Dec 06 09:47:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30278 DF PROTO=TCP SPT=45854 DPT=9100 SEQ=3552094298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800631F0000000001030307) 
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:47:40 np0005548790.localdomain podman[240030]: 2025-12-06 09:47:40.712421651 +0000 UTC m=+0.687697205 container init 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:47:40 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:47:40.724Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 06 09:47:40 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:47:40 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 06 09:47:40 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:40Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:47:40 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:47:40.724Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 06 09:47:40 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:47:40.725Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 06 09:47:40 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:47:40.725Z caller=handler.go:105 level=info collector=container
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:47:40 np0005548790.localdomain podman[240030]: 2025-12-06 09:47:40.754498338 +0000 UTC m=+0.729773882 container start 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:47:40 np0005548790.localdomain podman[240030]: podman_exporter
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Dec 06 09:47:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Dec 06 09:47:41 np0005548790.localdomain systemd[1]: Started podman_exporter container.
Dec 06 09:47:41 np0005548790.localdomain sudo[239966]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:41 np0005548790.localdomain podman[240054]: 2025-12-06 09:47:41.045546154 +0000 UTC m=+0.285775085 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:47:41 np0005548790.localdomain podman[240054]: 2025-12-06 09:47:41.055235906 +0000 UTC m=+0.295464797 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:47:41 np0005548790.localdomain podman[240054]: unhealthy
Dec 06 09:47:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:47:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:47:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61-merged.mount: Deactivated successfully.
Dec 06 09:47:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63165 DF PROTO=TCP SPT=40486 DPT=9100 SEQ=3433094193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18006E1F0000000001030307) 
Dec 06 09:47:43 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:47:43 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:47:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8-merged.mount: Deactivated successfully.
Dec 06 09:47:44 np0005548790.localdomain sudo[240184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fceshjhnsevjfruvpykwsrpylmbwaasi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014464.088109-2056-217027025824221/AnsiballZ_stat.py
Dec 06 09:47:44 np0005548790.localdomain sudo[240184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:44 np0005548790.localdomain python3.9[240186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:47:44 np0005548790.localdomain sudo[240184]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:44 np0005548790.localdomain sudo[240272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hovkhdaaapcgfcrnjidhlwoghniirbss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014464.088109-2056-217027025824221/AnsiballZ_copy.py
Dec 06 09:47:44 np0005548790.localdomain sudo[240272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:45 np0005548790.localdomain python3.9[240274]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014464.088109-2056-217027025824221/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:47:45 np0005548790.localdomain sudo[240272]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:47:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:47:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:47:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:47:46 np0005548790.localdomain sudo[240382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzmrkhebpqjbaoytbxhwmjioqxpxmawm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014466.6506045-2107-190641277685090/AnsiballZ_container_config_data.py
Dec 06 09:47:46 np0005548790.localdomain sudo[240382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:47 np0005548790.localdomain python3.9[240384]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 06 09:47:47 np0005548790.localdomain sudo[240382]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:47:47 np0005548790.localdomain sudo[240492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzgckgzsbhzzmjgdrszvwddqheardcjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014467.458375-2135-19592672160318/AnsiballZ_container_config_hash.py
Dec 06 09:47:47 np0005548790.localdomain sudo[240492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:47 np0005548790.localdomain python3.9[240494]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:47:47 np0005548790.localdomain sudo[240492]: pam_unix(sudo:session): session closed for user root
Dec 06 09:47:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:47:48.354 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:47:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:47:48.355 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:47:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:47:48.355 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:47:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28246 DF PROTO=TCP SPT=56270 DPT=9102 SEQ=561881602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180082110000000001030307) 
Dec 06 09:47:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61906 DF PROTO=TCP SPT=41932 DPT=9882 SEQ=2891171614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180082180000000001030307) 
Dec 06 09:47:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:47:49 np0005548790.localdomain sudo[240602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxjsaizewnxqvtvmhlogjwyiradhkvar ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014468.3763423-2164-225231045587401/AnsiballZ_edpm_container_manage.py
Dec 06 09:47:49 np0005548790.localdomain sudo[240602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:47:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:47:49 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:47:49 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:47:49 np0005548790.localdomain python3[240604]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:47:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:47:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:47:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:47:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:47:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28248 DF PROTO=TCP SPT=56270 DPT=9102 SEQ=561881602 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18008E200000000001030307) 
Dec 06 09:47:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:47:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e776680c96e76fe14e56ff40e379ac9bdb9ba3d732b302c8beb7b6113cedc1ac-merged.mount: Deactivated successfully.
Dec 06 09:47:51 np0005548790.localdomain podman[240617]: 2025-12-06 09:47:51.83884241 +0000 UTC m=+2.104364138 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 09:47:51 np0005548790.localdomain podman[240617]: 2025-12-06 09:47:51.848036149 +0000 UTC m=+2.113557887 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 09:47:51 np0005548790.localdomain podman[240628]: 2025-12-06 09:47:51.939832297 +0000 UTC m=+1.456974293 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:47:51 np0005548790.localdomain podman[240628]: 2025-12-06 09:47:51.970997198 +0000 UTC m=+1.488139114 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:47:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Dec 06 09:47:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Dec 06 09:47:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:47:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9c5ded326c11c52da5ab8fc5537c56948ef8cb9ad4217d530c95e9a7122f4a61-merged.mount: Deactivated successfully.
Dec 06 09:47:54 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:47:54 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:47:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49706 DF PROTO=TCP SPT=52714 DPT=9105 SEQ=4117510545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18009A5F0000000001030307) 
Dec 06 09:47:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1e604deea57dbda554a168861cff1238f93b8c6c69c863c43aed37d9d99c5fed-merged.mount: Deactivated successfully.
Dec 06 09:47:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Dec 06 09:47:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:47:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:47:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:47:56 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:56Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged: invalid argument"
Dec 06 09:47:56 np0005548790.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:47:56 np0005548790.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:47:56 np0005548790.localdomain podman[239825]: time="2025-12-06T09:47:56Z" level=error msg="Getting root fs size for \"0f653fecfc67df9cb5a24adae077e22b1fae5e81ccf4d53df968317d5cab89d6\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": creating overlay mount to /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/XVN6RLVGVK6A4CYHJWUWJDQC7Q:/var/lib/containers/storage/overlay/l/KEAEZY6IHY6VIZOZTB25O7P4XO:/var/lib/containers/storage/overlay/l/TTSLVTNK7GTY3BTXZICSWH3UA2,upperdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/diff,workdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/work,nodev,metacopy=on\": no such file or directory"
Dec 06 09:47:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52012 DF PROTO=TCP SPT=42378 DPT=9105 SEQ=1830276597 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800A71F0000000001030307) 
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.237 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.254 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.254 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.254 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.254 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.254 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.255 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.255 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.268 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.268 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.268 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.269 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.269 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:47:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:47:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.726 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.876 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.877 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13180MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.877 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.877 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.947 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.948 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:47:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:58.970 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:47:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:47:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e776680c96e76fe14e56ff40e379ac9bdb9ba3d732b302c8beb7b6113cedc1ac-merged.mount: Deactivated successfully.
Dec 06 09:47:59 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:47:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:59.436 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:47:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:59.441 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:47:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:59.465 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:47:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:59.468 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:47:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:47:59.468 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:47:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:47:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Dec 06 09:48:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:00.100 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:00.100 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:00.101 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:48:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:00.101 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:48:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:00.131 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:48:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:00.132 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49708 DF PROTO=TCP SPT=52714 DPT=9105 SEQ=4117510545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800B21F0000000001030307) 
Dec 06 09:48:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:48:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:48:01 np0005548790.localdomain podman[240739]: 2025-12-06 09:48:01.462648626 +0000 UTC m=+0.143612578 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 06 09:48:01 np0005548790.localdomain podman[240739]: 2025-12-06 09:48:01.502077491 +0000 UTC m=+0.183041463 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 09:48:01 np0005548790.localdomain podman[240738]: 2025-12-06 09:48:01.511847504 +0000 UTC m=+0.193407402 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:48:01 np0005548790.localdomain podman[240738]: 2025-12-06 09:48:01.517162908 +0000 UTC m=+0.198722806 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 09:48:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:03 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:48:03 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:48:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:03 np0005548790.localdomain podman[240640]: 2025-12-06 09:47:54.168988513 +0000 UTC m=+2.368232191 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 06 09:48:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:04 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61910 DF PROTO=TCP SPT=41932 DPT=9882 SEQ=2891171614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800BF1F0000000001030307) 
Dec 06 09:48:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4d2b1f5f95097aa03fff37b0cad6d11c71f5f221a93020bec6d77c9d01b86ad5-merged.mount: Deactivated successfully.
Dec 06 09:48:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17293 DF PROTO=TCP SPT=60442 DPT=9101 SEQ=2513564287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800C91F0000000001030307) 
Dec 06 09:48:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:48:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:08 np0005548790.localdomain sudo[240800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:48:08 np0005548790.localdomain sudo[240800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:48:08 np0005548790.localdomain sudo[240800]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:08 np0005548790.localdomain sudo[240818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:48:08 np0005548790.localdomain sudo[240818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:48:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:48:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50709 DF PROTO=TCP SPT=33670 DPT=9100 SEQ=3558002238 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800D71F0000000001030307) 
Dec 06 09:48:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:10 np0005548790.localdomain podman[240856]: 2025-12-06 09:48:10.750843678 +0000 UTC m=+1.997510102 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 09:48:10 np0005548790.localdomain podman[240856]: 2025-12-06 09:48:10.784123434 +0000 UTC m=+2.030789818 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:48:10 np0005548790.localdomain podman[240856]: unhealthy
Dec 06 09:48:10 np0005548790.localdomain podman[240846]: 2025-12-06 09:48:08.763159869 +0000 UTC m=+0.040262593 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 06 09:48:11 np0005548790.localdomain sudo[240818]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:11 np0005548790.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:11 np0005548790.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:11 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:48:11 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:48:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:12 np0005548790.localdomain podman[239825]: time="2025-12-06T09:48:12Z" level=error msg="Getting root fs size for \"19f5afcedae52571a906616dc033c991a4c107c1ab9296a14f7098b386a253a9\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 06 09:48:12 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:12 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:12 np0005548790.localdomain sudo[240909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:48:12 np0005548790.localdomain sudo[240909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:48:12 np0005548790.localdomain sudo[240909]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13421 DF PROTO=TCP SPT=39854 DPT=9100 SEQ=1369793106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800E3600000000001030307) 
Dec 06 09:48:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:48:13 np0005548790.localdomain podman[240927]: 2025-12-06 09:48:13.575437626 +0000 UTC m=+0.091858365 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:48:13 np0005548790.localdomain podman[240927]: 2025-12-06 09:48:13.586204422 +0000 UTC m=+0.102625151 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:48:13 np0005548790.localdomain podman[240927]: unhealthy
Dec 06 09:48:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:48:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7f25be2b5c3eb053d9b9667fd987790b6eba6abedf717019f3a18cf19bc3f462-merged.mount: Deactivated successfully.
Dec 06 09:48:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7f25be2b5c3eb053d9b9667fd987790b6eba6abedf717019f3a18cf19bc3f462-merged.mount: Deactivated successfully.
Dec 06 09:48:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4d2b1f5f95097aa03fff37b0cad6d11c71f5f221a93020bec6d77c9d01b86ad5-merged.mount: Deactivated successfully.
Dec 06 09:48:15 np0005548790.localdomain podman[240846]: 
Dec 06 09:48:15 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:48:15 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:48:15 np0005548790.localdomain podman[240846]: 2025-12-06 09:48:15.41067517 +0000 UTC m=+6.687777854 container create 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Dec 06 09:48:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:48:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:48:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:48:17 np0005548790.localdomain python3[240604]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 06 09:48:17 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:17 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42151 DF PROTO=TCP SPT=59706 DPT=9102 SEQ=691793878 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800F7410000000001030307) 
Dec 06 09:48:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57193 DF PROTO=TCP SPT=60504 DPT=9882 SEQ=2267164443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1800F7480000000001030307) 
Dec 06 09:48:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:20 np0005548790.localdomain sudo[240602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13422 DF PROTO=TCP SPT=39854 DPT=9100 SEQ=1369793106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801031F0000000001030307) 
Dec 06 09:48:21 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:21 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:21 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:48:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0bc46538263e83004254135f9d6e87019950ff3d90284a88c9b18ce035a29527-merged.mount: Deactivated successfully.
Dec 06 09:48:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:48:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:48:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:48:23 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:23 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:48:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:48:24 np0005548790.localdomain podman[240989]: 2025-12-06 09:48:24.579065586 +0000 UTC m=+0.094335440 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:48:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13073 DF PROTO=TCP SPT=40060 DPT=9105 SEQ=2895532945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18010F9F0000000001030307) 
Dec 06 09:48:24 np0005548790.localdomain podman[240990]: 2025-12-06 09:48:24.631293896 +0000 UTC m=+0.142841452 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:48:24 np0005548790.localdomain podman[240990]: 2025-12-06 09:48:24.646038478 +0000 UTC m=+0.157586024 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:48:24 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:24 np0005548790.localdomain podman[240989]: 2025-12-06 09:48:24.661859319 +0000 UTC m=+0.177129193 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:48:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7f25be2b5c3eb053d9b9667fd987790b6eba6abedf717019f3a18cf19bc3f462-merged.mount: Deactivated successfully.
Dec 06 09:48:24 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:24 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:24 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:48:24 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:48:25 np0005548790.localdomain podman[239825]: time="2025-12-06T09:48:25Z" level=error msg="Getting root fs size for \"20fa1e3e2c450f83fcc6f14f36e25e8e45d1e92d34b521d91999fbb7d129e37d\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy"
Dec 06 09:48:25 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:25 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:25 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:25 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:48:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0bc46538263e83004254135f9d6e87019950ff3d90284a88c9b18ce035a29527-merged.mount: Deactivated successfully.
Dec 06 09:48:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-180b3aecafe7c8da44d60e3a56d560d6da12982d5d153924b65220d20b7de3a0-merged.mount: Deactivated successfully.
Dec 06 09:48:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24869 DF PROTO=TCP SPT=43308 DPT=9105 SEQ=3194383658 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18011B1F0000000001030307) 
Dec 06 09:48:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:48:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:48:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:48:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:29 np0005548790.localdomain sudo[241118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swwovcunnspugizyakmjylzftzaarucg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014509.6539028-2188-181634028395081/AnsiballZ_stat.py
Dec 06 09:48:29 np0005548790.localdomain sudo[241118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:48:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:48:30 np0005548790.localdomain python3.9[241120]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:48:30 np0005548790.localdomain sudo[241118]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13075 DF PROTO=TCP SPT=40060 DPT=9105 SEQ=2895532945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801275F0000000001030307) 
Dec 06 09:48:30 np0005548790.localdomain sudo[241230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkjdbrfjhbivwjparwgagqsndleglkuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014510.4661489-2215-99562025433644/AnsiballZ_file.py
Dec 06 09:48:30 np0005548790.localdomain sudo[241230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:30 np0005548790.localdomain python3.9[241232]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:30 np0005548790.localdomain sudo[241230]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:48:31 np0005548790.localdomain sudo[241339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oaezgukkmybxyrrhouuealiezyxggdez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014511.0413852-2215-225277821280709/AnsiballZ_copy.py
Dec 06 09:48:31 np0005548790.localdomain sudo[241339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:31 np0005548790.localdomain python3.9[241341]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014511.0413852-2215-225277821280709/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:48:31 np0005548790.localdomain sudo[241339]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-180b3aecafe7c8da44d60e3a56d560d6da12982d5d153924b65220d20b7de3a0-merged.mount: Deactivated successfully.
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:48:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:48:32 np0005548790.localdomain sudo[241394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wknsjfrsrgplhsuldvcfkqmsxldphezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014511.0413852-2215-225277821280709/AnsiballZ_systemd.py
Dec 06 09:48:32 np0005548790.localdomain sudo[241394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:32 np0005548790.localdomain python3.9[241396]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:48:32 np0005548790.localdomain systemd-sysv-generator[241429]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:48:32 np0005548790.localdomain systemd-rc-local-generator[241425]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:32 np0005548790.localdomain sudo[241394]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:33 np0005548790.localdomain sudo[241485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-culwvjxrjxaggjmjmvnaicavwzturiqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014511.0413852-2215-225277821280709/AnsiballZ_systemd.py
Dec 06 09:48:33 np0005548790.localdomain sudo[241485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:33 np0005548790.localdomain python3.9[241487]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:48:33 np0005548790.localdomain podman[241489]: 2025-12-06 09:48:33.470648058 +0000 UTC m=+0.092906353 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 09:48:33 np0005548790.localdomain podman[241489]: 2025-12-06 09:48:33.508266679 +0000 UTC m=+0.130524984 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 09:48:33 np0005548790.localdomain systemd-sysv-generator[241550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:48:33 np0005548790.localdomain systemd-rc-local-generator[241547]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:48:33 np0005548790.localdomain podman[241490]: 2025-12-06 09:48:33.529695499 +0000 UTC m=+0.151028969 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:48:33 np0005548790.localdomain podman[241490]: 2025-12-06 09:48:33.638204556 +0000 UTC m=+0.259537996 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:48:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57197 DF PROTO=TCP SPT=60504 DPT=9882 SEQ=2267164443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180133200000000001030307) 
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: Starting openstack_network_exporter container...
Dec 06 09:48:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:48:34 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:48:34 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:48:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:48:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:48:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:48:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61370 DF PROTO=TCP SPT=47332 DPT=9101 SEQ=4104029600 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18013F1F0000000001030307) 
Dec 06 09:48:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:37 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:48:37 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d175e2df1681c89dcf70fc0995a2b31af4208ed46b2b89d660fb1e45de4aea/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:37 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d175e2df1681c89dcf70fc0995a2b31af4208ed46b2b89d660fb1e45de4aea/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:48:37 np0005548790.localdomain podman[241570]: 2025-12-06 09:48:37.121882022 +0000 UTC m=+3.312878712 container init 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *bridge.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *coverage.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *datapath.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *iface.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *memory.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *ovnnorthd.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *ovn.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *ovsdbserver.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *pmd_perf.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *pmd_rxq.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: INFO    09:48:37 main.go:48: registering *vswitch.Collector
Dec 06 09:48:37 np0005548790.localdomain openstack_network_exporter[241585]: NOTICE  09:48:37 main.go:82: listening on http://:9105/metrics
Dec 06 09:48:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:48:37 np0005548790.localdomain podman[241570]: 2025-12-06 09:48:37.161280871 +0000 UTC m=+3.352277551 container start 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec 06 09:48:37 np0005548790.localdomain podman[241570]: openstack_network_exporter
Dec 06 09:48:38 np0005548790.localdomain systemd[1]: tmp-crun.8aXdCi.mount: Deactivated successfully.
Dec 06 09:48:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:48:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:48:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:48:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:39 np0005548790.localdomain systemd[1]: Started openstack_network_exporter container.
Dec 06 09:48:39 np0005548790.localdomain sudo[241485]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:39 np0005548790.localdomain podman[241595]: 2025-12-06 09:48:39.128830214 +0000 UTC m=+1.968784228 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:48:39 np0005548790.localdomain podman[241595]: 2025-12-06 09:48:39.181194807 +0000 UTC m=+2.021148841 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, version=9.6)
Dec 06 09:48:39 np0005548790.localdomain sudo[241723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ouitwvvunrdbjuluidpgiwoahxzyfipy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014519.3719482-2287-211953993849732/AnsiballZ_systemd.py
Dec 06 09:48:39 np0005548790.localdomain sudo[241723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:39 np0005548790.localdomain python3.9[241725]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:48:39 np0005548790.localdomain systemd[1]: Stopping openstack_network_exporter container...
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: libpod-9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.scope: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain podman[241729]: 2025-12-06 09:48:40.345196259 +0000 UTC m=+0.348539044 container died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal)
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.timer: Deactivated successfully.
Dec 06 09:48:40 np0005548790.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:48:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63168 DF PROTO=TCP SPT=40486 DPT=9100 SEQ=3433094193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18014D200000000001030307) 
Dec 06 09:48:41 np0005548790.localdomain podman[241729]: 2025-12-06 09:48:41.002480829 +0000 UTC m=+1.005823594 container cleanup 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 06 09:48:41 np0005548790.localdomain podman[241729]: openstack_network_exporter
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-93d175e2df1681c89dcf70fc0995a2b31af4208ed46b2b89d660fb1e45de4aea-merged.mount: Deactivated successfully.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8-userdata-shm.mount: Deactivated successfully.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 06 09:48:41 np0005548790.localdomain podman[241754]: 2025-12-06 09:48:41.105289095 +0000 UTC m=+0.072002207 container cleanup 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, 
com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 09:48:41 np0005548790.localdomain podman[241754]: openstack_network_exporter
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: Stopped openstack_network_exporter container.
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: Starting openstack_network_exporter container...
Dec 06 09:48:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:48:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5856 DF PROTO=TCP SPT=57060 DPT=9100 SEQ=3213752120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180158A00000000001030307) 
Dec 06 09:48:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:48:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4-merged.mount: Deactivated successfully.
Dec 06 09:48:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4-merged.mount: Deactivated successfully.
Dec 06 09:48:43 np0005548790.localdomain podman[241778]: 2025-12-06 09:48:43.979219486 +0000 UTC m=+1.991202044 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute)
Dec 06 09:48:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:48:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d175e2df1681c89dcf70fc0995a2b31af4208ed46b2b89d660fb1e45de4aea/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d175e2df1681c89dcf70fc0995a2b31af4208ed46b2b89d660fb1e45de4aea/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 06 09:48:44 np0005548790.localdomain podman[241778]: 2025-12-06 09:48:44.021244245 +0000 UTC m=+2.033226783 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:48:44 np0005548790.localdomain podman[241778]: unhealthy
Dec 06 09:48:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:48:44 np0005548790.localdomain podman[241767]: 2025-12-06 09:48:44.031687082 +0000 UTC m=+2.546435279 container init 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, version=9.6)
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *bridge.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *coverage.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *datapath.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *iface.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *memory.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *ovnnorthd.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *ovn.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *ovsdbserver.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *pmd_perf.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *pmd_rxq.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: INFO    09:48:44 main.go:48: registering *vswitch.Collector
Dec 06 09:48:44 np0005548790.localdomain openstack_network_exporter[241796]: NOTICE  09:48:44 main.go:82: listening on http://:9105/metrics
Dec 06 09:48:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:48:44 np0005548790.localdomain podman[241767]: 2025-12-06 09:48:44.082821303 +0000 UTC m=+2.597569530 container start 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image 
that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Dec 06 09:48:44 np0005548790.localdomain podman[241767]: openstack_network_exporter
Dec 06 09:48:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:48:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:48:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:48:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:46 np0005548790.localdomain systemd[1]: Started openstack_network_exporter container.
Dec 06 09:48:46 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:48:46 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:48:46 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:46 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:46 np0005548790.localdomain sudo[241723]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:46 np0005548790.localdomain podman[241810]: 2025-12-06 09:48:46.919971215 +0000 UTC m=+2.833678061 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Dec 06 09:48:46 np0005548790.localdomain podman[241821]: 2025-12-06 09:48:46.963882744 +0000 UTC m=+1.475069881 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 09:48:47 np0005548790.localdomain podman[241821]: 2025-12-06 09:48:47.001039292 +0000 UTC m=+1.512226439 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:48:47 np0005548790.localdomain podman[241821]: unhealthy
Dec 06 09:48:47 np0005548790.localdomain podman[241810]: 2025-12-06 09:48:47.015702303 +0000 UTC m=+2.929409189 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 09:48:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:48:48.356 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:48:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:48:48.356 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:48:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:48:48.356 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:48:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43334 DF PROTO=TCP SPT=35360 DPT=9102 SEQ=1238065192 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18016C700000000001030307) 
Dec 06 09:48:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56992 DF PROTO=TCP SPT=47448 DPT=9882 SEQ=3030258420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18016C780000000001030307) 
Dec 06 09:48:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:48:48 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:48 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:48 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:48:48 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:48:48 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:48:49 np0005548790.localdomain sudo[241961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpfgsczrwcldhjgysqgmsiwqgpehofin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014529.2256262-2313-151673209681648/AnsiballZ_find.py
Dec 06 09:48:49 np0005548790.localdomain sudo[241961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:48:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:49 np0005548790.localdomain python3.9[241963]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:48:49 np0005548790.localdomain sudo[241961]: pam_unix(sudo:session): session closed for user root
Dec 06 09:48:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:48:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8550f340eb477249447dfc212906241c2d1057a5f1e3afa20f33c907252b7739-merged.mount: Deactivated successfully.
Dec 06 09:48:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:50 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:50 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:50 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:48:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43336 DF PROTO=TCP SPT=35360 DPT=9102 SEQ=1238065192 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801785F0000000001030307) 
Dec 06 09:48:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:48:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:48:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:48:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:48:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4-merged.mount: Deactivated successfully.
Dec 06 09:48:53 np0005548790.localdomain sshd[241981]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:48:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-86ebff1bc107fdb4ba48a82a29c7022b5ab13c6ae61733851ce5b1c08088cab4-merged.mount: Deactivated successfully.
Dec 06 09:48:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:48:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:48:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b0ebbf80d85ec0610e007daf7d98c19344ed72ab9cca52ad6ba1866e75af9720-merged.mount: Deactivated successfully.
Dec 06 09:48:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:48:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8876 DF PROTO=TCP SPT=46566 DPT=9105 SEQ=2727149879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180184DF0000000001030307) 
Dec 06 09:48:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:48:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:48:55 np0005548790.localdomain podman[241984]: 2025-12-06 09:48:55.28442093 +0000 UTC m=+0.094775463 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 09:48:55 np0005548790.localdomain podman[241984]: 2025-12-06 09:48:55.323226163 +0000 UTC m=+0.133580666 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:48:55 np0005548790.localdomain podman[241983]: 2025-12-06 09:48:55.323339936 +0000 UTC m=+0.135486906 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:48:55 np0005548790.localdomain podman[241983]: 2025-12-06 09:48:55.334063561 +0000 UTC m=+0.146210491 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:48:55 np0005548790.localdomain sshd[241981]: Received disconnect from 43.163.93.82 port 33980:11:  [preauth]
Dec 06 09:48:55 np0005548790.localdomain sshd[241981]: Disconnected from authenticating user root 43.163.93.82 port 33980 [preauth]
Dec 06 09:48:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:48:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:48:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 06 09:48:56 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:48:56 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:48:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:48:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8550f340eb477249447dfc212906241c2d1057a5f1e3afa20f33c907252b7739-merged.mount: Deactivated successfully.
Dec 06 09:48:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49711 DF PROTO=TCP SPT=52714 DPT=9105 SEQ=4117510545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801911F0000000001030307) 
Dec 06 09:48:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:57.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:57.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:57.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:57.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:57.887 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:48:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:48:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:48:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:48:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:48:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:58.883 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:58.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:48:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:48:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:48:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:59.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:48:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:59.911 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:48:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:59.912 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:48:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:59.913 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:48:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:59.913 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:48:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:48:59.914 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:49:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:00 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:00 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.370 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.510 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.510 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13286MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.511 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.511 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:49:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.586 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.587 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:49:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:00.610 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:49:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8878 DF PROTO=TCP SPT=46566 DPT=9105 SEQ=2727149879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18019C9F0000000001030307) 
Dec 06 09:49:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:49:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b0ebbf80d85ec0610e007daf7d98c19344ed72ab9cca52ad6ba1866e75af9720-merged.mount: Deactivated successfully.
Dec 06 09:49:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b0ebbf80d85ec0610e007daf7d98c19344ed72ab9cca52ad6ba1866e75af9720-merged.mount: Deactivated successfully.
Dec 06 09:49:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:01.045 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:49:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:01.051 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:49:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:01.074 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:49:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:01.077 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:49:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:01.077 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:49:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:02.078 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:49:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:02.079 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:49:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:02.079 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:49:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:02.100 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:49:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:02.101 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:49:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 06 09:49:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-208fd3e28e1d3a8776a6524bc8771d61ec27c4bea36118c21f18f1762d939041-merged.mount: Deactivated successfully.
Dec 06 09:49:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-208fd3e28e1d3a8776a6524bc8771d61ec27c4bea36118c21f18f1762d939041-merged.mount: Deactivated successfully.
Dec 06 09:49:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56996 DF PROTO=TCP SPT=47448 DPT=9882 SEQ=3030258420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801A91F0000000001030307) 
Dec 06 09:49:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:49:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:49:04 np0005548790.localdomain podman[242067]: 2025-12-06 09:49:04.576983301 +0000 UTC m=+0.093029366 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 06 09:49:04 np0005548790.localdomain systemd[1]: tmp-crun.RUJHlc.mount: Deactivated successfully.
Dec 06 09:49:04 np0005548790.localdomain podman[242068]: 2025-12-06 09:49:04.64457973 +0000 UTC m=+0.156707490 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:49:04 np0005548790.localdomain podman[242067]: 2025-12-06 09:49:04.663130434 +0000 UTC m=+0.179176489 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 09:49:04 np0005548790.localdomain podman[242068]: 2025-12-06 09:49:04.712732984 +0000 UTC m=+0.224860744 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 09:49:05 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:49:05 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:49:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:49:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 06 09:49:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36176 DF PROTO=TCP SPT=40804 DPT=9101 SEQ=2401174116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801B31F0000000001030307) 
Dec 06 09:49:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:49:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:49:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:49:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13424 DF PROTO=TCP SPT=39854 DPT=9100 SEQ=1369793106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801C13E0000000001030307) 
Dec 06 09:49:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:49:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-998b5ee4f740798b555f9dbe537836a42f089d97290b4d07cd81c8b9291a5941-merged.mount: Deactivated successfully.
Dec 06 09:49:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 06 09:49:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:49:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-208fd3e28e1d3a8776a6524bc8771d61ec27c4bea36118c21f18f1762d939041-merged.mount: Deactivated successfully.
Dec 06 09:49:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:49:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:49:12 np0005548790.localdomain sudo[242107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:49:12 np0005548790.localdomain sudo[242107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:49:12 np0005548790.localdomain sudo[242107]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:13 np0005548790.localdomain sudo[242125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:49:13 np0005548790.localdomain sudo[242125]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:49:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36804 DF PROTO=TCP SPT=47960 DPT=9100 SEQ=1110869477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801CD9F0000000001030307) 
Dec 06 09:49:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:49:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:49:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:49:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:49:14 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:14 np0005548790.localdomain sudo[242125]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:15 np0005548790.localdomain sudo[242175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:49:15 np0005548790.localdomain sudo[242175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:49:15 np0005548790.localdomain sudo[242175]: pam_unix(sudo:session): session closed for user root
Dec 06 09:49:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:49:17 np0005548790.localdomain podman[242193]: 2025-12-06 09:49:17.010442798 +0000 UTC m=+0.090794086 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 09:49:17 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:17 np0005548790.localdomain podman[242193]: 2025-12-06 09:49:17.039919091 +0000 UTC m=+0.120270399 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:49:17 np0005548790.localdomain podman[242193]: unhealthy
Dec 06 09:49:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:17 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:17 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:17 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:49:17 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:49:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60724 DF PROTO=TCP SPT=48728 DPT=9102 SEQ=985932290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801E1A10000000001030307) 
Dec 06 09:49:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10971 DF PROTO=TCP SPT=60564 DPT=9882 SEQ=13965666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801E1A80000000001030307) 
Dec 06 09:49:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:18 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:18 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:49:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:49:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:19 np0005548790.localdomain podman[242211]: 2025-12-06 09:49:19.328267133 +0000 UTC m=+0.081442557 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:49:19 np0005548790.localdomain podman[242211]: 2025-12-06 09:49:19.342157092 +0000 UTC m=+0.095332546 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:49:19 np0005548790.localdomain podman[242211]: unhealthy
Dec 06 09:49:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:49:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-998b5ee4f740798b555f9dbe537836a42f089d97290b4d07cd81c8b9291a5941-merged.mount: Deactivated successfully.
Dec 06 09:49:21 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:49:21 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:49:21 np0005548790.localdomain podman[242212]: 2025-12-06 09:49:21.031533525 +0000 UTC m=+1.780723523 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm)
Dec 06 09:49:21 np0005548790.localdomain podman[242212]: 2025-12-06 09:49:21.040263067 +0000 UTC m=+1.789453105 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec 06 09:49:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60726 DF PROTO=TCP SPT=48728 DPT=9102 SEQ=985932290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801ED9F0000000001030307) 
Dec 06 09:49:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:49:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:49:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:22 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:49:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:49:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34275 DF PROTO=TCP SPT=45006 DPT=9105 SEQ=975862744 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1801FA1F0000000001030307) 
Dec 06 09:49:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b09684bd34ad3c8fd079e7ba80e7282e1f6c9c49f3e804a6f19145e8413aff6f-merged.mount: Deactivated successfully.
Dec 06 09:49:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b09684bd34ad3c8fd079e7ba80e7282e1f6c9c49f3e804a6f19145e8413aff6f-merged.mount: Deactivated successfully.
Dec 06 09:49:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13078 DF PROTO=TCP SPT=40060 DPT=9105 SEQ=2895532945 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802051F0000000001030307) 
Dec 06 09:49:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:49:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:49:27 np0005548790.localdomain podman[242253]: 2025-12-06 09:49:27.583911294 +0000 UTC m=+0.089833200 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:49:27 np0005548790.localdomain podman[242253]: 2025-12-06 09:49:27.593466957 +0000 UTC m=+0.099388883 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true)
Dec 06 09:49:27 np0005548790.localdomain systemd[1]: tmp-crun.P6epi1.mount: Deactivated successfully.
Dec 06 09:49:27 np0005548790.localdomain podman[242252]: 2025-12-06 09:49:27.671337678 +0000 UTC m=+0.175430325 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:49:27 np0005548790.localdomain podman[242252]: 2025-12-06 09:49:27.680102711 +0000 UTC m=+0.184195388 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:49:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 06 09:49:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 06 09:49:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:28 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:49:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 06 09:49:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 06 09:49:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 06 09:49:30 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:49:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34277 DF PROTO=TCP SPT=45006 DPT=9105 SEQ=975862744 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180211DF0000000001030307) 
Dec 06 09:49:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:49:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 06 09:49:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 06 09:49:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:49:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10975 DF PROTO=TCP SPT=60564 DPT=9882 SEQ=13965666 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18021D200000000001030307) 
Dec 06 09:49:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:49:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:49:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:49:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:49:35 np0005548790.localdomain podman[242294]: 2025-12-06 09:49:35.555837451 +0000 UTC m=+0.061640960 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 09:49:35 np0005548790.localdomain podman[242294]: 2025-12-06 09:49:35.586049304 +0000 UTC m=+0.091852873 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 06 09:49:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 06 09:49:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61455 DF PROTO=TCP SPT=56014 DPT=9101 SEQ=4190329148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802291F0000000001030307) 
Dec 06 09:49:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b09684bd34ad3c8fd079e7ba80e7282e1f6c9c49f3e804a6f19145e8413aff6f-merged.mount: Deactivated successfully.
Dec 06 09:49:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b09684bd34ad3c8fd079e7ba80e7282e1f6c9c49f3e804a6f19145e8413aff6f-merged.mount: Deactivated successfully.
Dec 06 09:49:37 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:49:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:49:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5859 DF PROTO=TCP SPT=57060 DPT=9100 SEQ=3213752120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180237200000000001030307) 
Dec 06 09:49:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:49:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-68867986d4a2077de0c22951b19fb4fc1ac570d0cdbe733dc50c260ed931a7ca-merged.mount: Deactivated successfully.
Dec 06 09:49:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 06 09:49:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 06 09:49:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 06 09:49:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 06 09:49:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32705 DF PROTO=TCP SPT=51124 DPT=9100 SEQ=176487726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180242DF0000000001030307) 
Dec 06 09:49:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 06 09:49:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:49:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:49:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:49:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:49:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:49:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 06 09:49:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:49:47 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:47 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:47 np0005548790.localdomain podman[242293]: 2025-12-06 09:49:47.755670365 +0000 UTC m=+12.265825721 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:49:47 np0005548790.localdomain podman[242293]: 2025-12-06 09:49:47.789985827 +0000 UTC m=+12.300141183 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 09:49:47 np0005548790.localdomain podman[242327]: 2025-12-06 09:49:47.810013639 +0000 UTC m=+0.137795555 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:49:47 np0005548790.localdomain podman[242327]: 2025-12-06 09:49:47.842192085 +0000 UTC m=+0.169974031 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:49:47 np0005548790.localdomain podman[242327]: unhealthy
Dec 06 09:49:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:49:48.356 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:49:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:49:48.357 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:49:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:49:48.357 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:49:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57791 DF PROTO=TCP SPT=42928 DPT=9102 SEQ=1452113865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180256D10000000001030307) 
Dec 06 09:49:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33736 DF PROTO=TCP SPT=40600 DPT=9882 SEQ=1179980306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180256D70000000001030307) 
Dec 06 09:49:48 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:48 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:48 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:49:48 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:49:48 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:49:48 np0005548790.localdomain systemd[1]: tmp-crun.1TNV8s.mount: Deactivated successfully.
Dec 06 09:49:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:49:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:49:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-68867986d4a2077de0c22951b19fb4fc1ac570d0cdbe733dc50c260ed931a7ca-merged.mount: Deactivated successfully.
Dec 06 09:49:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-68867986d4a2077de0c22951b19fb4fc1ac570d0cdbe733dc50c260ed931a7ca-merged.mount: Deactivated successfully.
Dec 06 09:49:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:49:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1b834d84ece9c7f6d27bdbfaa42b19d8cf29e8c24c1847f75b1aabfc03aadccf-merged.mount: Deactivated successfully.
Dec 06 09:49:51 np0005548790.localdomain podman[242350]: 2025-12-06 09:49:51.200555759 +0000 UTC m=+0.083516571 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:49:51 np0005548790.localdomain podman[242350]: 2025-12-06 09:49:51.23253918 +0000 UTC m=+0.115499982 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:49:51 np0005548790.localdomain podman[242350]: unhealthy
Dec 06 09:49:51 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:49:51 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:49:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57793 DF PROTO=TCP SPT=42928 DPT=9102 SEQ=1452113865 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180262DF0000000001030307) 
Dec 06 09:49:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1b834d84ece9c7f6d27bdbfaa42b19d8cf29e8c24c1847f75b1aabfc03aadccf-merged.mount: Deactivated successfully.
Dec 06 09:49:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:49:52 np0005548790.localdomain podman[242372]: 2025-12-06 09:49:52.57130513 +0000 UTC m=+0.080259705 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Dec 06 09:49:52 np0005548790.localdomain podman[242372]: 2025-12-06 09:49:52.590230823 +0000 UTC m=+0.099185358 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:49:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:49:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:49:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:49:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:49:54 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:49:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61635 DF PROTO=TCP SPT=52988 DPT=9105 SEQ=212737213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18026F1F0000000001030307) 
Dec 06 09:49:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:49:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:49:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:49:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:49:56 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:49:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:49:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8881 DF PROTO=TCP SPT=46566 DPT=9105 SEQ=2727149879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18027B1F0000000001030307) 
Dec 06 09:49:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:57.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:49:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:49:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6ff3cc0882e58e51d95c7dbb8a9007371d8427f559dbbb0b928cad9ba7629e14-merged.mount: Deactivated successfully.
Dec 06 09:49:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:49:58 np0005548790.localdomain podman[242392]: 2025-12-06 09:49:58.750914396 +0000 UTC m=+0.059950155 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec 06 09:49:58 np0005548790.localdomain podman[242392]: 2025-12-06 09:49:58.789141972 +0000 UTC m=+0.098177761 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec 06 09:49:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:58.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:49:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:49:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:49:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:49:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:59.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:49:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:59.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:49:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:49:59.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:50:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1b834d84ece9c7f6d27bdbfaa42b19d8cf29e8c24c1847f75b1aabfc03aadccf-merged.mount: Deactivated successfully.
Dec 06 09:50:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61637 DF PROTO=TCP SPT=52988 DPT=9105 SEQ=212737213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180286DF0000000001030307) 
Dec 06 09:50:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1b834d84ece9c7f6d27bdbfaa42b19d8cf29e8c24c1847f75b1aabfc03aadccf-merged.mount: Deactivated successfully.
Dec 06 09:50:00 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:00 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:50:00 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:50:00 np0005548790.localdomain podman[242410]: 2025-12-06 09:50:00.8193799 +0000 UTC m=+0.086896572 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:50:00 np0005548790.localdomain podman[242410]: 2025-12-06 09:50:00.828296637 +0000 UTC m=+0.095813339 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.881 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.903 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.903 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.904 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.904 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:50:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:00.904 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.337 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:50:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.518 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.519 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13145MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.519 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.519 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.594 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.594 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:50:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:01.610 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:50:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:02.095 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:50:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:02.101 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:50:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:02.120 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:50:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:02.123 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:50:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:02.123 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:50:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:02 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:02 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:02 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:50:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:03.144 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:03.171 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:03.171 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:50:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:03.171 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:50:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:03.191 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:50:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:03.191 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:50:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ff4856ee5a83cc87cbeb349af5655b5d75f579ca17ec21c7fae4967e407aa-merged.mount: Deactivated successfully.
Dec 06 09:50:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33740 DF PROTO=TCP SPT=40600 DPT=9882 SEQ=1179980306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802931F0000000001030307) 
Dec 06 09:50:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:05 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:05 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:05 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:07 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40308 DF PROTO=TCP SPT=43722 DPT=9100 SEQ=1459641247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802A0600000000001030307) 
Dec 06 09:50:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:50:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6ff3cc0882e58e51d95c7dbb8a9007371d8427f559dbbb0b928cad9ba7629e14-merged.mount: Deactivated successfully.
Dec 06 09:50:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:07 np0005548790.localdomain podman[242475]: 2025-12-06 09:50:07.706040048 +0000 UTC m=+0.085521315 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 06 09:50:07 np0005548790.localdomain podman[242475]: 2025-12-06 09:50:07.768132899 +0000 UTC m=+0.147614116 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 09:50:08 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:08 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:08 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:50:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:50:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.1 total, 600.0 interval
                                                          Cumulative writes: 5186 writes, 23K keys, 5186 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5186 writes, 682 syncs, 7.60 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:09 np0005548790.localdomain podman[239825]: time="2025-12-06T09:50:09Z" level=error msg="Getting root fs size for \"773c852679a9b7349ef8e9bc7e9228330c2e3ad6625820b8bfcd29e9c9904040\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:09 np0005548790.localdomain sshd[229974]: Received disconnect from 192.168.122.30 port 51284:11: disconnected by user
Dec 06 09:50:09 np0005548790.localdomain sshd[229974]: Disconnected from user zuul 192.168.122.30 port 51284
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:50:09 np0005548790.localdomain sshd[229971]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: session-55.scope: Deactivated successfully.
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: session-55.scope: Consumed 58.262s CPU time.
Dec 06 09:50:09 np0005548790.localdomain systemd-logind[760]: Session 55 logged out. Waiting for processes to exit.
Dec 06 09:50:09 np0005548790.localdomain systemd-logind[760]: Removed session 55.
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ff4856ee5a83cc87cbeb349af5655b5d75f579ca17ec21c7fae4967e407aa-merged.mount: Deactivated successfully.
Dec 06 09:50:09 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-853ff4856ee5a83cc87cbeb349af5655b5d75f579ca17ec21c7fae4967e407aa-merged.mount: Deactivated successfully.
Dec 06 09:50:09 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36807 DF PROTO=TCP SPT=47960 DPT=9100 SEQ=1110869477 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802AB200000000001030307) 
Dec 06 09:50:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4-merged.mount: Deactivated successfully.
Dec 06 09:50:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4-merged.mount: Deactivated successfully.
Dec 06 09:50:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 09:50:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.2 total, 600.0 interval
                                                          Cumulative writes: 5446 writes, 23K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5446 writes, 742 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 09:50:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40310 DF PROTO=TCP SPT=43722 DPT=9100 SEQ=1459641247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802B8200000000001030307) 
Dec 06 09:50:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:15 np0005548790.localdomain sudo[242499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:50:15 np0005548790.localdomain sudo[242499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:50:15 np0005548790.localdomain sudo[242499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:15 np0005548790.localdomain sudo[242517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:50:15 np0005548790.localdomain sudo[242517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:50:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:17 np0005548790.localdomain sudo[242517]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24932 DF PROTO=TCP SPT=44742 DPT=9102 SEQ=2350785206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802CC010000000001030307) 
Dec 06 09:50:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32161 DF PROTO=TCP SPT=47810 DPT=9882 SEQ=1128414543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802CC080000000001030307) 
Dec 06 09:50:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:50:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:50:18 np0005548790.localdomain podman[242567]: 2025-12-06 09:50:18.796936394 +0000 UTC m=+0.078918792 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 09:50:18 np0005548790.localdomain podman[242568]: 2025-12-06 09:50:18.815591936 +0000 UTC m=+0.090980820 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:50:18 np0005548790.localdomain podman[242568]: 2025-12-06 09:50:18.823994128 +0000 UTC m=+0.099383032 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 09:50:18 np0005548790.localdomain podman[242568]: unhealthy
Dec 06 09:50:18 np0005548790.localdomain podman[242567]: 2025-12-06 09:50:18.881424233 +0000 UTC m=+0.163406691 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:50:18 np0005548790.localdomain sudo[242602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:50:18 np0005548790.localdomain sudo[242602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:50:18 np0005548790.localdomain sudo[242602]: pam_unix(sudo:session): session closed for user root
Dec 06 09:50:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:19 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:50:19 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:50:19 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:50:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:19 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:50:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24934 DF PROTO=TCP SPT=44742 DPT=9102 SEQ=2350785206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802D8200000000001030307) 
Dec 06 09:50:21 np0005548790.localdomain systemd[1]: tmp-crun.rW9or8.mount: Deactivated successfully.
Dec 06 09:50:21 np0005548790.localdomain podman[242621]: 2025-12-06 09:50:21.584069623 +0000 UTC m=+0.089586394 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:50:21 np0005548790.localdomain podman[242621]: 2025-12-06 09:50:21.591677914 +0000 UTC m=+0.097194645 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:50:21 np0005548790.localdomain podman[242621]: unhealthy
Dec 06 09:50:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5d6d513da38af6e3cd155d4b3518d4b989d374acab410fbc1ad1d5be1919c445-merged.mount: Deactivated successfully.
Dec 06 09:50:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:22 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:50:22 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:50:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c83dd2ae8ec29b7cb801d1fd4229674fbfc32ccfb1cee7918282407025d079f4-merged.mount: Deactivated successfully.
Dec 06 09:50:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:50:24 np0005548790.localdomain podman[242645]: 2025-12-06 09:50:24.566253597 +0000 UTC m=+0.087744996 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec 06 09:50:24 np0005548790.localdomain podman[242645]: 2025-12-06 09:50:24.58231138 +0000 UTC m=+0.103802839 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41)
Dec 06 09:50:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3847 DF PROTO=TCP SPT=40256 DPT=9105 SEQ=1489439475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802E45F0000000001030307) 
Dec 06 09:50:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:50:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:50:25 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:25 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:50:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:27 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:27 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:27 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:27 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:27 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34280 DF PROTO=TCP SPT=45006 DPT=9105 SEQ=975862744 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802F1200000000001030307) 
Dec 06 09:50:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 06 09:50:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5d6d513da38af6e3cd155d4b3518d4b989d374acab410fbc1ad1d5be1919c445-merged.mount: Deactivated successfully.
Dec 06 09:50:29 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3849 DF PROTO=TCP SPT=40256 DPT=9105 SEQ=1489439475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1802FC1F0000000001030307) 
Dec 06 09:50:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:50:31 np0005548790.localdomain systemd[1]: tmp-crun.wTfO5L.mount: Deactivated successfully.
Dec 06 09:50:31 np0005548790.localdomain podman[242663]: 2025-12-06 09:50:31.571555826 +0000 UTC m=+0.088519335 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec 06 09:50:31 np0005548790.localdomain podman[242663]: 2025-12-06 09:50:31.584022806 +0000 UTC m=+0.100986285 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:50:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:50:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ba8b2af72d8dcbf02be78782d6a093327973a6f19db17113d2698cfcfba8f0d1-merged.mount: Deactivated successfully.
Dec 06 09:50:31 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:50:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:50:32 np0005548790.localdomain podman[242680]: 2025-12-06 09:50:32.564885985 +0000 UTC m=+0.080640858 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:50:32 np0005548790.localdomain podman[242680]: 2025-12-06 09:50:32.597208218 +0000 UTC m=+0.112963151 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:50:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 06 09:50:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 06 09:50:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 06 09:50:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32165 DF PROTO=TCP SPT=47810 DPT=9882 SEQ=1128414543 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803091F0000000001030307) 
Dec 06 09:50:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:50:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:50:34 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:50:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:50:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 06 09:50:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 06 09:50:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:50:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39970 DF PROTO=TCP SPT=58818 DPT=9101 SEQ=2780217863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803131F0000000001030307) 
Dec 06 09:50:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:50:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:50:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:50:38 np0005548790.localdomain podman[242703]: 2025-12-06 09:50:38.553464621 +0000 UTC m=+0.072160425 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:50:38 np0005548790.localdomain podman[242703]: 2025-12-06 09:50:38.581074809 +0000 UTC m=+0.099770593 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller)
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:50:38 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:38 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32708 DF PROTO=TCP SPT=51124 DPT=9100 SEQ=176487726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803211F0000000001030307) 
Dec 06 09:50:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:50:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ba8b2af72d8dcbf02be78782d6a093327973a6f19db17113d2698cfcfba8f0d1-merged.mount: Deactivated successfully.
Dec 06 09:50:41 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 06 09:50:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 06 09:50:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 06 09:50:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52089 DF PROTO=TCP SPT=33202 DPT=9100 SEQ=514071644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18032D1F0000000001030307) 
Dec 06 09:50:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:50:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 06 09:50:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 06 09:50:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:50:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 06 09:50:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618-merged.mount: Deactivated successfully.
Dec 06 09:50:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:50:48.357 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:50:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:50:48.357 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:50:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:50:48.358 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:50:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618-merged.mount: Deactivated successfully.
Dec 06 09:50:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31143 DF PROTO=TCP SPT=33482 DPT=9102 SEQ=2206658756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180341310000000001030307) 
Dec 06 09:50:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35838 DF PROTO=TCP SPT=50664 DPT=9882 SEQ=118321936 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180341370000000001030307) 
Dec 06 09:50:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 06 09:50:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:50:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:50:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:50:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:50:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:50:49 np0005548790.localdomain podman[242727]: 2025-12-06 09:50:49.567355715 +0000 UTC m=+0.079527078 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:50:49 np0005548790.localdomain podman[242727]: 2025-12-06 09:50:49.596004141 +0000 UTC m=+0.108175554 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 09:50:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:51 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:50:51 np0005548790.localdomain podman[242728]: 2025-12-06 09:50:51.246367818 +0000 UTC m=+1.757486694 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:50:51 np0005548790.localdomain podman[242728]: 2025-12-06 09:50:51.279221384 +0000 UTC m=+1.790340260 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 09:50:51 np0005548790.localdomain podman[242728]: unhealthy
Dec 06 09:50:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52090 DF PROTO=TCP SPT=33202 DPT=9100 SEQ=514071644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18034D1F0000000001030307) 
Dec 06 09:50:52 np0005548790.localdomain systemd[1]: tmp-crun.MEOOSz.mount: Deactivated successfully.
Dec 06 09:50:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 06 09:50:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:50:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:53 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:50:53 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:50:53 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:53 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:53 np0005548790.localdomain systemd[1]: tmp-crun.GFA0hG.mount: Deactivated successfully.
Dec 06 09:50:53 np0005548790.localdomain podman[242761]: 2025-12-06 09:50:53.207353918 +0000 UTC m=+0.718407139 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:50:53 np0005548790.localdomain podman[242761]: 2025-12-06 09:50:53.236923087 +0000 UTC m=+0.747976308 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:50:53 np0005548790.localdomain podman[242761]: unhealthy
Dec 06 09:50:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45899 DF PROTO=TCP SPT=44170 DPT=9105 SEQ=103144328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803599F0000000001030307) 
Dec 06 09:50:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:50:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:50:55 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:55 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:55 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:50:55 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:50:55 np0005548790.localdomain podman[242784]: 2025-12-06 09:50:55.964745322 +0000 UTC m=+0.332043269 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Dec 06 09:50:55 np0005548790.localdomain podman[242784]: 2025-12-06 09:50:55.985078168 +0000 UTC m=+0.352376125 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec 06 09:50:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:56 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:56.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:56.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 09:50:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:56.901 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 09:50:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:56.902 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:56.902 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 09:50:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:56.920 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:50:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61640 DF PROTO=TCP SPT=52988 DPT=9105 SEQ=212737213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803651F0000000001030307) 
Dec 06 09:50:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:50:57 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:50:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:57 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:58 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:58 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:50:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:50:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:50:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:50:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:50:59.931 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45901 DF PROTO=TCP SPT=44170 DPT=9105 SEQ=103144328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803715F0000000001030307) 
Dec 06 09:51:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8b3046bf95c005ae5e06ce4ce46dded50d0c609d6971f4cdb5d43c0345e88618-merged.mount: Deactivated successfully.
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.906 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.906 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.907 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.907 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:51:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:00.907 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:51:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:51:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 06 09:51:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.369 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.529 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.530 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13084MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.530 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.530 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.685 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.686 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.748 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.799 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.799 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.816 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 09:51:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.841 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 09:51:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:01.859 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:51:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 06 09:51:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:51:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:02.344 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:51:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:02.349 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:51:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:02.369 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:51:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:02.370 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:51:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:02.371 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:51:02 np0005548790.localdomain podman[242847]: 2025-12-06 09:51:02.386860601 +0000 UTC m=+0.051885869 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:51:02 np0005548790.localdomain podman[242847]: 2025-12-06 09:51:02.396567417 +0000 UTC m=+0.061592685 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:51:02 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:51:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059-merged.mount: Deactivated successfully.
Dec 06 09:51:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:51:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:51:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31147 DF PROTO=TCP SPT=33482 DPT=9102 SEQ=2206658756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18037D1F0000000001030307) 
Dec 06 09:51:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:51:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:51:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.366 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.366 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.367 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:51:04 np0005548790.localdomain podman[242867]: 2025-12-06 09:51:04.82023733 +0000 UTC m=+0.082167138 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:51:04 np0005548790.localdomain podman[242867]: 2025-12-06 09:51:04.858113709 +0000 UTC m=+0.120043447 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:51:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:04.905 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:51:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:51:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:51:05 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:51:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:51:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27226 DF PROTO=TCP SPT=53778 DPT=9101 SEQ=2750948836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180389200000000001030307) 
Dec 06 09:51:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.319 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:51:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:51:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:51:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:09 np0005548790.localdomain podman[242890]: 2025-12-06 09:51:09.182412759 +0000 UTC m=+0.100137381 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Dec 06 09:51:09 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:09 np0005548790.localdomain podman[242890]: 2025-12-06 09:51:09.243160462 +0000 UTC m=+0.160885114 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 06 09:51:09 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:09 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:09 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:51:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40313 DF PROTO=TCP SPT=43722 DPT=9100 SEQ=1459641247 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803971F0000000001030307) 
Dec 06 09:51:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:51:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e0160acc82432e6ab5584ba775b0f7164edaf038948049207c6a0305ea190059-merged.mount: Deactivated successfully.
Dec 06 09:51:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:51:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50482 DF PROTO=TCP SPT=37926 DPT=9100 SEQ=270492905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803A2600000000001030307) 
Dec 06 09:51:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:51:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:51:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:51:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:51:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-51b981582ce7f1a34b84e0c23c6399b8da018a3879f6d023b21a81fa9cc33483-merged.mount: Deactivated successfully.
Dec 06 09:51:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-51b981582ce7f1a34b84e0c23c6399b8da018a3879f6d023b21a81fa9cc33483-merged.mount: Deactivated successfully.
Dec 06 09:51:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:51:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:51:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:51:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:51:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:51:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:51:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:51:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:51:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21769 DF PROTO=TCP SPT=55122 DPT=9102 SEQ=4244947930 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803B6610000000001030307) 
Dec 06 09:51:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15859 DF PROTO=TCP SPT=49976 DPT=9882 SEQ=2482982338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803B6680000000001030307) 
Dec 06 09:51:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:51:19 np0005548790.localdomain sudo[242917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:51:19 np0005548790.localdomain sudo[242917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:51:19 np0005548790.localdomain sudo[242917]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:19 np0005548790.localdomain sudo[242935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:51:19 np0005548790.localdomain sudo[242935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:51:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 06 09:51:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:51:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:51:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:51:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:51:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:51:20 np0005548790.localdomain sudo[242935]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:51:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:21 np0005548790.localdomain systemd[1]: tmp-crun.LTQNE4.mount: Deactivated successfully.
Dec 06 09:51:21 np0005548790.localdomain podman[242986]: 2025-12-06 09:51:21.329915053 +0000 UTC m=+0.097495423 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:51:21 np0005548790.localdomain podman[242986]: 2025-12-06 09:51:21.36017179 +0000 UTC m=+0.127752140 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 09:51:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15861 DF PROTO=TCP SPT=49976 DPT=9882 SEQ=2482982338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803C25F0000000001030307) 
Dec 06 09:51:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:51:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df-merged.mount: Deactivated successfully.
Dec 06 09:51:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 06 09:51:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-51b981582ce7f1a34b84e0c23c6399b8da018a3879f6d023b21a81fa9cc33483-merged.mount: Deactivated successfully.
Dec 06 09:51:23 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:51:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:51:23 np0005548790.localdomain podman[243004]: 2025-12-06 09:51:23.346978219 +0000 UTC m=+0.104853560 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:51:23 np0005548790.localdomain podman[243004]: 2025-12-06 09:51:23.379190212 +0000 UTC m=+0.137065543 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 06 09:51:23 np0005548790.localdomain podman[243004]: unhealthy
Dec 06 09:51:23 np0005548790.localdomain sudo[243021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:51:23 np0005548790.localdomain sudo[243021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:51:23 np0005548790.localdomain sudo[243021]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:51:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:51:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3961 DF PROTO=TCP SPT=52878 DPT=9105 SEQ=2975771545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803CEDF0000000001030307) 
Dec 06 09:51:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:51:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:51:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:51:25 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:51:25 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:51:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:51:26 np0005548790.localdomain podman[243039]: 2025-12-06 09:51:26.30289812 +0000 UTC m=+0.067462409 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:51:26 np0005548790.localdomain podman[243039]: 2025-12-06 09:51:26.310694006 +0000 UTC m=+0.075258275 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:51:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:26 np0005548790.localdomain podman[243039]: unhealthy
Dec 06 09:51:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:51:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3852 DF PROTO=TCP SPT=40256 DPT=9105 SEQ=1489439475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803DB1F0000000001030307) 
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:51:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:51:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 06 09:51:28 np0005548790.localdomain podman[243060]: 2025-12-06 09:51:28.077231049 +0000 UTC m=+0.071514076 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Dec 06 09:51:28 np0005548790.localdomain podman[243060]: 2025-12-06 09:51:28.112126244 +0000 UTC m=+0.106409321 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 09:51:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3a7065aceffc6d2146ce223b38d40dafae928acede64701f0e57091e6babe580-merged.mount: Deactivated successfully.
Dec 06 09:51:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:51:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:51:28 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:51:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:51:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:51:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-415e7a279decd7116c2befbd34e92cf4f0c1820f58473bd34c5452500e4d856c-merged.mount: Deactivated successfully.
Dec 06 09:51:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-415e7a279decd7116c2befbd34e92cf4f0c1820f58473bd34c5452500e4d856c-merged.mount: Deactivated successfully.
Dec 06 09:51:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3963 DF PROTO=TCP SPT=52878 DPT=9105 SEQ=2975771545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803E6A00000000001030307) 
Dec 06 09:51:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:51:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:51:31 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 06 09:51:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df-merged.mount: Deactivated successfully.
Dec 06 09:51:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:51:32 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:32 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:32 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:32 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:32 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:32 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:51:33 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:33 np0005548790.localdomain podman[243079]: 2025-12-06 09:51:33.320488913 +0000 UTC m=+0.089054001 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Dec 06 09:51:33 np0005548790.localdomain podman[243079]: 2025-12-06 09:51:33.332121072 +0000 UTC m=+0.100686170 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:51:33 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:33 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15863 DF PROTO=TCP SPT=49976 DPT=9882 SEQ=2482982338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803F31F0000000001030307) 
Dec 06 09:51:34 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:34 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:34 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:51:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:51:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 06 09:51:34 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:34 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:34 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3a7065aceffc6d2146ce223b38d40dafae928acede64701f0e57091e6babe580-merged.mount: Deactivated successfully.
Dec 06 09:51:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:51:35 np0005548790.localdomain podman[243097]: 2025-12-06 09:51:35.442200158 +0000 UTC m=+0.092158304 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:51:35 np0005548790.localdomain podman[243097]: 2025-12-06 09:51:35.453681742 +0000 UTC m=+0.103639908 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:51:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:36 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0521d8df5d3673de67e6c677f90dfbd55b1c1f914f1671502a747da6648e8a6d-merged.mount: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53237 DF PROTO=TCP SPT=51478 DPT=9101 SEQ=34607686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1803FD1F0000000001030307) 
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 06 09:51:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-415e7a279decd7116c2befbd34e92cf4f0c1820f58473bd34c5452500e4d856c-merged.mount: Deactivated successfully.
Dec 06 09:51:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:51:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:51:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:51:39 np0005548790.localdomain sshd[243118]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:51:39 np0005548790.localdomain sshd[243118]: Accepted publickey for zuul from 192.168.122.30 port 57112 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:51:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:51:39 np0005548790.localdomain systemd-logind[760]: New session 56 of user zuul.
Dec 06 09:51:39 np0005548790.localdomain systemd[1]: Started Session 56 of User zuul.
Dec 06 09:51:39 np0005548790.localdomain sshd[243118]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:51:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:39 np0005548790.localdomain podman[243121]: 2025-12-06 09:51:39.625848473 +0000 UTC m=+0.088521497 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:51:39 np0005548790.localdomain podman[243121]: 2025-12-06 09:51:39.733998779 +0000 UTC m=+0.196671743 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:51:39 np0005548790.localdomain sudo[243237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkuhajydazxvyultrqaupqeyjtuavnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014699.6058483-2799-156905567395681/AnsiballZ_podman_container_info.py
Dec 06 09:51:39 np0005548790.localdomain sudo[243237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:40 np0005548790.localdomain python3.9[243239]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 06 09:51:40 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52092 DF PROTO=TCP SPT=33202 DPT=9100 SEQ=514071644 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18040B1F0000000001030307) 
Dec 06 09:51:40 np0005548790.localdomain systemd[1]: tmp-crun.NFZHHe.mount: Deactivated successfully.
Dec 06 09:51:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:40 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:51:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 06 09:51:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-73cce0d5d3e32439730a4dc0ecb2505d2f64a391fd8c3ee449e9aeefbf57a5a3-merged.mount: Deactivated successfully.
Dec 06 09:51:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:51:43 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:43 np0005548790.localdomain sudo[243237]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50938 DF PROTO=TCP SPT=37566 DPT=9100 SEQ=2716771628 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804179F0000000001030307) 
Dec 06 09:51:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:51:43 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:43 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:43 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:43 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:43 np0005548790.localdomain sudo[243361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gscugupfegufarrclfmkkpzcxmlhqzon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014703.2257593-2810-85415412501670/AnsiballZ_podman_container_exec.py
Dec 06 09:51:43 np0005548790.localdomain sudo[243361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:43 np0005548790.localdomain python3.9[243363]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:51:43 np0005548790.localdomain systemd[1]: Started libpod-conmon-f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.scope.
Dec 06 09:51:43 np0005548790.localdomain podman[243364]: 2025-12-06 09:51:43.861475066 +0000 UTC m=+0.102420906 container exec f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:51:43 np0005548790.localdomain podman[243364]: 2025-12-06 09:51:43.902441791 +0000 UTC m=+0.143387621 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:51:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:51:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ead6993018cbf28a6911fcc6a4afc0bfdf470e6d9ea5b6906d250c0b5f41599c-merged.mount: Deactivated successfully.
Dec 06 09:51:45 np0005548790.localdomain sudo[243361]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:51:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0521d8df5d3673de67e6c677f90dfbd55b1c1f914f1671502a747da6648e8a6d-merged.mount: Deactivated successfully.
Dec 06 09:51:47 np0005548790.localdomain sudo[243499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vcxbzabetfqcvwddhvbodoxfhffhnsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014706.876013-2818-263421409292421/AnsiballZ_podman_container_exec.py
Dec 06 09:51:47 np0005548790.localdomain sudo[243499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:47 np0005548790.localdomain python3.9[243501]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:51:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:51:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:51:48.358 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:51:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:51:48.360 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:51:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:51:48.360 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:51:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30155 DF PROTO=TCP SPT=52562 DPT=9102 SEQ=682148747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18042B920000000001030307) 
Dec 06 09:51:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40671 DF PROTO=TCP SPT=48246 DPT=9882 SEQ=140153458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18042B970000000001030307) 
Dec 06 09:51:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:51:48 np0005548790.localdomain systemd[1]: libpod-conmon-f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.scope: Deactivated successfully.
Dec 06 09:51:48 np0005548790.localdomain systemd[1]: Started libpod-conmon-f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.scope.
Dec 06 09:51:48 np0005548790.localdomain podman[243502]: 2025-12-06 09:51:48.581050382 +0000 UTC m=+1.203031760 container exec f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:51:48 np0005548790.localdomain podman[243502]: 2025-12-06 09:51:48.614336104 +0000 UTC m=+1.236317472 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:51:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:51:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:51:51 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:51 np0005548790.localdomain sudo[243499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40673 DF PROTO=TCP SPT=48246 DPT=9882 SEQ=140153458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804379F0000000001030307) 
Dec 06 09:51:51 np0005548790.localdomain sudo[243641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqugpzmqsqwmjuxihprlbhinwvheyftk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014711.2414505-2826-274352146964717/AnsiballZ_file.py
Dec 06 09:51:51 np0005548790.localdomain sudo[243641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:51 np0005548790.localdomain python3.9[243643]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:51:51 np0005548790.localdomain sudo[243641]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:52 np0005548790.localdomain sudo[243751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewkozfomkcbpcckadmafcootbuuatqvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014711.9428387-2835-66230863263913/AnsiballZ_podman_container_info.py
Dec 06 09:51:52 np0005548790.localdomain sudo[243751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:52 np0005548790.localdomain python3.9[243753]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 06 09:51:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:52 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:51:52 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:52 np0005548790.localdomain systemd[1]: libpod-conmon-f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.scope: Deactivated successfully.
Dec 06 09:51:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:51:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:54 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:54 np0005548790.localdomain podman[243765]: 2025-12-06 09:51:54.065446587 +0000 UTC m=+0.576523919 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:51:54 np0005548790.localdomain podman[243765]: 2025-12-06 09:51:54.074148708 +0000 UTC m=+0.585226070 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 09:51:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:51:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:51:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43667 DF PROTO=TCP SPT=49136 DPT=9105 SEQ=2656235182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180443DF0000000001030307) 
Dec 06 09:51:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:51:54 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:54 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:51:54 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:51:54 np0005548790.localdomain sudo[243751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:55 np0005548790.localdomain sudo[243892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtmhybhktqerznmzqhrrtfqucdymlyum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014714.9716039-2843-92295776147800/AnsiballZ_podman_container_exec.py
Dec 06 09:51:55 np0005548790.localdomain sudo[243892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:55 np0005548790.localdomain python3.9[243894]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:51:55 np0005548790.localdomain systemd[1]: Started libpod-conmon-643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.scope.
Dec 06 09:51:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:51:55 np0005548790.localdomain podman[243895]: 2025-12-06 09:51:55.581272576 +0000 UTC m=+0.143679658 container exec 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 09:51:55 np0005548790.localdomain podman[243895]: 2025-12-06 09:51:55.620263469 +0000 UTC m=+0.182670561 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:51:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:51:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339-merged.mount: Deactivated successfully.
Dec 06 09:51:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339-merged.mount: Deactivated successfully.
Dec 06 09:51:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 06 09:51:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:51:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45904 DF PROTO=TCP SPT=44170 DPT=9105 SEQ=103144328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18044F1F0000000001030307) 
Dec 06 09:51:57 np0005548790.localdomain podman[243909]: 2025-12-06 09:51:57.617506586 +0000 UTC m=+2.040892925 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:51:57 np0005548790.localdomain sudo[243892]: pam_unix(sudo:session): session closed for user root
Dec 06 09:51:57 np0005548790.localdomain podman[243909]: 2025-12-06 09:51:57.647172092 +0000 UTC m=+2.070558431 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:51:57 np0005548790.localdomain podman[243909]: unhealthy
Dec 06 09:51:57 np0005548790.localdomain podman[243936]: 2025-12-06 09:51:57.71539305 +0000 UTC m=+0.324640704 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:51:57 np0005548790.localdomain podman[243936]: 2025-12-06 09:51:57.749117993 +0000 UTC m=+0.358365667 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:51:57 np0005548790.localdomain podman[243936]: unhealthy
Dec 06 09:51:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ead6993018cbf28a6911fcc6a4afc0bfdf470e6d9ea5b6906d250c0b5f41599c-merged.mount: Deactivated successfully.
Dec 06 09:51:58 np0005548790.localdomain sudo[244072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvquhcnmqnnxveremsfalkvjfzpykfpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014717.8270683-2851-256282737065590/AnsiballZ_podman_container_exec.py
Dec 06 09:51:58 np0005548790.localdomain sudo[244072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:51:58 np0005548790.localdomain python3.9[244074]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:51:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:51:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:51:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 06 09:51:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 06 09:51:59 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:51:59.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:51:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: libpod-conmon-643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.scope: Deactivated successfully.
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Failed with result 'exit-code'.
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:52:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.scope.
Dec 06 09:52:00 np0005548790.localdomain podman[244075]: 2025-12-06 09:52:00.285203288 +0000 UTC m=+1.950617731 container exec 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:52:00 np0005548790.localdomain podman[244086]: 2025-12-06 09:52:00.343863563 +0000 UTC m=+1.108868866 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:52:00 np0005548790.localdomain podman[244075]: 2025-12-06 09:52:00.36940996 +0000 UTC m=+2.034824473 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:52:00 np0005548790.localdomain podman[244086]: 2025-12-06 09:52:00.410312764 +0000 UTC m=+1.175318097 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec 06 09:52:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43669 DF PROTO=TCP SPT=49136 DPT=9105 SEQ=2656235182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18045B9F0000000001030307) 
Dec 06 09:52:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:00.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:00.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:00.887 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:52:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:52:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:52:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:01.882 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:02 np0005548790.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:02 np0005548790.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:02 np0005548790.localdomain podman[239825]: time="2025-12-06T09:52:02Z" level=error msg="Getting root fs size for \"d4b5d718ec3cc7e829835eddfc0b32c8c2c1aacff4a11827338bfade57727103\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": creating overlay mount to /var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/KEAEZY6IHY6VIZOZTB25O7P4XO:/var/lib/containers/storage/overlay/l/TTSLVTNK7GTY3BTXZICSWH3UA2,upperdir=/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/diff,workdir=/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/work,nodev,metacopy=on\": no such file or directory"
Dec 06 09:52:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:02 np0005548790.localdomain systemd[1]: libpod-conmon-643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.scope: Deactivated successfully.
Dec 06 09:52:02 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:52:02 np0005548790.localdomain sudo[244072]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:02 np0005548790.localdomain sudo[244235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iivdjpkmsvhnehcekzzyufstedhiqziq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014722.4224317-2859-82854698115212/AnsiballZ_file.py
Dec 06 09:52:02 np0005548790.localdomain sudo[244235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.908 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.908 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.909 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.909 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:52:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:02.909 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:52:02 np0005548790.localdomain python3.9[244237]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:02 np0005548790.localdomain sudo[244235]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.297 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:52:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:03 np0005548790.localdomain sudo[244367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxkuomrhuvthpelzyxncapncnikwxgov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014723.1241643-2868-58354605455881/AnsiballZ_podman_container_info.py
Dec 06 09:52:03 np0005548790.localdomain sudo[244367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.469 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.471 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13103MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.472 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.473 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.562 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.563 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:52:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:03.590 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:52:03 np0005548790.localdomain python3.9[244369]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 06 09:52:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30159 DF PROTO=TCP SPT=52562 DPT=9102 SEQ=682148747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804671F0000000001030307) 
Dec 06 09:52:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:04.000 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:52:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:04.007 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:52:04 np0005548790.localdomain sudo[244367]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:04.029 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:52:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:04.032 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:52:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:04.032 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:52:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 06 09:52:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-32b5c90c9b92f21ac24e890e56ff13c8f62f2e0d1099ae340a806b11e7e1b242-merged.mount: Deactivated successfully.
Dec 06 09:52:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:52:04 np0005548790.localdomain podman[244435]: 2025-12-06 09:52:04.335933571 +0000 UTC m=+0.081565472 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:52:04 np0005548790.localdomain podman[244435]: 2025-12-06 09:52:04.356914018 +0000 UTC m=+0.102545979 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 09:52:04 np0005548790.localdomain sudo[244526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ecjrimbljbxgdyxcbyrbrdfnzknxlglw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014724.2216766-2876-41282501075666/AnsiballZ_podman_container_exec.py
Dec 06 09:52:04 np0005548790.localdomain sudo[244526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:04 np0005548790.localdomain python3.9[244528]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:52:04 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:52:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:05.028 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:05.029 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:05.029 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:52:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:05.029 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:52:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:05.049 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:52:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:52:05.050 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:52:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:52:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c1e85ee8cd933bc1928fa8420e88eccd78c498fc11458e12d7d40087a3d81339-merged.mount: Deactivated successfully.
Dec 06 09:52:06 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:52:06 np0005548790.localdomain podman[244542]: 2025-12-06 09:52:06.399068924 +0000 UTC m=+0.233175341 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:52:06 np0005548790.localdomain podman[244542]: 2025-12-06 09:52:06.436074685 +0000 UTC m=+0.270181202 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:52:06 np0005548790.localdomain systemd[1]: Started libpod-conmon-97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.scope.
Dec 06 09:52:06 np0005548790.localdomain podman[244529]: 2025-12-06 09:52:06.48799049 +0000 UTC m=+1.748027123 container exec 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 06 09:52:06 np0005548790.localdomain podman[244529]: 2025-12-06 09:52:06.521151189 +0000 UTC m=+1.781187852 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd)
Dec 06 09:52:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20228 DF PROTO=TCP SPT=58900 DPT=9101 SEQ=2421272094 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804731F0000000001030307) 
Dec 06 09:52:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:52:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:52:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 06 09:52:07 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:52:07 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:07 np0005548790.localdomain sudo[244526]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:08 np0005548790.localdomain sudo[244691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdsrbvqwoelrmrsydmuzlgoynyyrlkel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014728.0731218-2884-89646627312731/AnsiballZ_podman_container_exec.py
Dec 06 09:52:08 np0005548790.localdomain sudo[244691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:08 np0005548790.localdomain python3.9[244693]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 06 09:52:09 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:09 np0005548790.localdomain systemd[1]: libpod-conmon-97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.scope: Deactivated successfully.
Dec 06 09:52:09 np0005548790.localdomain systemd[1]: Started libpod-conmon-97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.scope.
Dec 06 09:52:09 np0005548790.localdomain podman[244694]: 2025-12-06 09:52:09.451885212 +0000 UTC m=+0.900362490 container exec 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 09:52:09 np0005548790.localdomain podman[244694]: 2025-12-06 09:52:09.480762648 +0000 UTC m=+0.929239926 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 09:52:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:10 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50485 DF PROTO=TCP SPT=37926 DPT=9100 SEQ=270492905 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804811F0000000001030307) 
Dec 06 09:52:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:10 np0005548790.localdomain sudo[244691]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:11 np0005548790.localdomain sudo[244829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trhefcaijrxeehdkgvsvtqevfgppjrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014730.7724338-2892-134954782299334/AnsiballZ_file.py
Dec 06 09:52:11 np0005548790.localdomain sudo[244829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:52:11 np0005548790.localdomain python3.9[244831]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:11 np0005548790.localdomain sudo[244829]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:52:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3ebaccff31fd8ea7811ba96442c3c33a15fe1448b51c25e74612ae52f41eb053-merged.mount: Deactivated successfully.
Dec 06 09:52:11 np0005548790.localdomain systemd[1]: libpod-conmon-97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.scope: Deactivated successfully.
Dec 06 09:52:11 np0005548790.localdomain podman[244832]: 2025-12-06 09:52:11.466085818 +0000 UTC m=+0.391944037 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:52:11 np0005548790.localdomain podman[244832]: 2025-12-06 09:52:11.527251319 +0000 UTC m=+0.453109528 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:52:11 np0005548790.localdomain sudo[244964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emerypdtrjaswyvzuyicfadpiuukfxap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014731.5602362-2901-133667052184273/AnsiballZ_podman_container_info.py
Dec 06 09:52:11 np0005548790.localdomain sudo[244964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:11 np0005548790.localdomain python3.9[244966]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 06 09:52:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:52:12 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:52:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 06 09:52:13 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26184 DF PROTO=TCP SPT=60516 DPT=9100 SEQ=4194734584 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18048CE00000000001030307) 
Dec 06 09:52:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 06 09:52:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-32b5c90c9b92f21ac24e890e56ff13c8f62f2e0d1099ae340a806b11e7e1b242-merged.mount: Deactivated successfully.
Dec 06 09:52:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:52:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:52:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:52:15 np0005548790.localdomain sudo[244964]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:15 np0005548790.localdomain sudo[245086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmxcxumeeglabsbwhfadhtarvxayndua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014735.6050222-2909-95952421853816/AnsiballZ_podman_container_exec.py
Dec 06 09:52:15 np0005548790.localdomain sudo[245086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:16 np0005548790.localdomain python3.9[245088]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:16 np0005548790.localdomain systemd[1]: Started libpod-conmon-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.scope.
Dec 06 09:52:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 06 09:52:16 np0005548790.localdomain podman[245089]: 2025-12-06 09:52:16.19982826 +0000 UTC m=+0.089482522 container exec 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3)
Dec 06 09:52:16 np0005548790.localdomain podman[245089]: 2025-12-06 09:52:16.232517587 +0000 UTC m=+0.122171869 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:52:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:52:16 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:52:16 np0005548790.localdomain sudo[245086]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:17 np0005548790.localdomain sudo[245225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mparfbdulcraxinybamrusxsbiuslrck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014736.908877-2917-257389195941729/AnsiballZ_podman_container_exec.py
Dec 06 09:52:17 np0005548790.localdomain sudo[245225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:17 np0005548790.localdomain python3.9[245227]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:17 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:17 np0005548790.localdomain systemd[1]: libpod-conmon-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.scope: Deactivated successfully.
Dec 06 09:52:17 np0005548790.localdomain systemd[1]: Started libpod-conmon-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.scope.
Dec 06 09:52:17 np0005548790.localdomain podman[245228]: 2025-12-06 09:52:17.996889992 +0000 UTC m=+0.607772606 container exec 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 09:52:18 np0005548790.localdomain podman[245228]: 2025-12-06 09:52:18.029107886 +0000 UTC m=+0.639990440 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute)
Dec 06 09:52:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:52:18Z" level=error msg="Getting root fs size for \"da7f9d3719dcb84b9e7f57b700648a6dd094cf294e7d55f44c28380e165d6830\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy"
Dec 06 09:52:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59923 DF PROTO=TCP SPT=33164 DPT=9102 SEQ=3553402563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804A0C30000000001030307) 
Dec 06 09:52:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38465 DF PROTO=TCP SPT=34444 DPT=9882 SEQ=3745809365 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804A0C70000000001030307) 
Dec 06 09:52:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:18 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:18 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:18 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:18 np0005548790.localdomain sudo[245225]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:19 np0005548790.localdomain sudo[245364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enwnseqhatwdjnegkkmsuklwlqavstbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014738.7858012-2925-128730078641662/AnsiballZ_file.py
Dec 06 09:52:19 np0005548790.localdomain sudo[245364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:19 np0005548790.localdomain python3.9[245366]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:19 np0005548790.localdomain sudo[245364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 06 09:52:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3ebaccff31fd8ea7811ba96442c3c33a15fe1448b51c25e74612ae52f41eb053-merged.mount: Deactivated successfully.
Dec 06 09:52:19 np0005548790.localdomain systemd[1]: libpod-conmon-8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.scope: Deactivated successfully.
Dec 06 09:52:19 np0005548790.localdomain sudo[245474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsyjmihpgfodobkpzrmpvhdughirvwvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014739.5336242-2934-180402498080199/AnsiballZ_podman_container_info.py
Dec 06 09:52:19 np0005548790.localdomain sudo[245474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:20 np0005548790.localdomain python3.9[245476]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 06 09:52:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:52:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:52:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282-merged.mount: Deactivated successfully.
Dec 06 09:52:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282-merged.mount: Deactivated successfully.
Dec 06 09:52:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:52:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59925 DF PROTO=TCP SPT=33164 DPT=9102 SEQ=3553402563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804ACDF0000000001030307) 
Dec 06 09:52:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:52:21 np0005548790.localdomain sudo[245474]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:22 np0005548790.localdomain sudo[245597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahmherqkvicwlzftrvcikndhxuwnszpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014741.7905462-2942-267983278475498/AnsiballZ_podman_container_exec.py
Dec 06 09:52:22 np0005548790.localdomain sudo[245597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:22 np0005548790.localdomain python3.9[245599]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:22 np0005548790.localdomain systemd[1]: Started libpod-conmon-028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.scope.
Dec 06 09:52:22 np0005548790.localdomain podman[245600]: 2025-12-06 09:52:22.389165606 +0000 UTC m=+0.121638515 container exec 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:52:22 np0005548790.localdomain podman[245600]: 2025-12-06 09:52:22.422245972 +0000 UTC m=+0.154718921 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:52:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:22 np0005548790.localdomain sudo[245597]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:23 np0005548790.localdomain sudo[245737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ougtwigsmxlzknyfxwvnfzookjurlikh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014743.025534-2950-274447084845016/AnsiballZ_podman_container_exec.py
Dec 06 09:52:23 np0005548790.localdomain sudo[245737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 06 09:52:23 np0005548790.localdomain python3.9[245739]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:52:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6cef3ed0275ec9a23efe0f943d6fe49d48d180e84df9235ed237a3514d395164-merged.mount: Deactivated successfully.
Dec 06 09:52:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6cef3ed0275ec9a23efe0f943d6fe49d48d180e84df9235ed237a3514d395164-merged.mount: Deactivated successfully.
Dec 06 09:52:23 np0005548790.localdomain sudo[245751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:52:23 np0005548790.localdomain sudo[245751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:52:23 np0005548790.localdomain sudo[245751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:23 np0005548790.localdomain sudo[245769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:52:23 np0005548790.localdomain sudo[245769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:52:24 np0005548790.localdomain systemd[1]: libpod-conmon-028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.scope: Deactivated successfully.
Dec 06 09:52:24 np0005548790.localdomain systemd[1]: Started libpod-conmon-028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.scope.
Dec 06 09:52:24 np0005548790.localdomain podman[245740]: 2025-12-06 09:52:24.209635737 +0000 UTC m=+0.685632549 container exec 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:52:24 np0005548790.localdomain podman[245740]: 2025-12-06 09:52:24.240876266 +0000 UTC m=+0.716873008 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:52:24 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:52:24 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55573 DF PROTO=TCP SPT=46182 DPT=9105 SEQ=813574035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804B91F0000000001030307) 
Dec 06 09:52:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:52:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:26 np0005548790.localdomain sudo[245737]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:26 np0005548790.localdomain podman[245816]: 2025-12-06 09:52:26.770292184 +0000 UTC m=+1.673030326 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 09:52:26 np0005548790.localdomain podman[245816]: 2025-12-06 09:52:26.77392386 +0000 UTC m=+1.676662102 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 09:52:26 np0005548790.localdomain sudo[245769]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:27 np0005548790.localdomain sudo[245960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqryeefzwvorfgrdrfhxhaosaxxdxrqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014746.9085593-2958-174317040759986/AnsiballZ_file.py
Dec 06 09:52:27 np0005548790.localdomain sudo[245960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:27 np0005548790.localdomain systemd[1]: libpod-conmon-028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.scope: Deactivated successfully.
Dec 06 09:52:27 np0005548790.localdomain python3.9[245962]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:27 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:52:27 np0005548790.localdomain sudo[245960]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:27 np0005548790.localdomain sudo[245968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:52:27 np0005548790.localdomain sudo[245968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:52:27 np0005548790.localdomain sudo[245968]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:27 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3966 DF PROTO=TCP SPT=52878 DPT=9105 SEQ=2975771545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804C51F0000000001030307) 
Dec 06 09:52:27 np0005548790.localdomain sudo[246088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdmvzztcpehxzyvtkqhjaspkepwrilqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014747.7213528-2967-141069070552263/AnsiballZ_podman_container_info.py
Dec 06 09:52:27 np0005548790.localdomain sudo[246088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:28 np0005548790.localdomain python3.9[246090]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 06 09:52:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 06 09:52:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282-merged.mount: Deactivated successfully.
Dec 06 09:52:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bdff378601a5745eaa4f1d4df62f2aeddf56fee1bd1fce6d42c6ce595385b282-merged.mount: Deactivated successfully.
Dec 06 09:52:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:29 np0005548790.localdomain sudo[246088]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:30 np0005548790.localdomain sudo[246211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqpwxdzyfaugyawmrtjvvfiyosptrrhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014749.7967834-2975-41729699292813/AnsiballZ_podman_container_exec.py
Dec 06 09:52:30 np0005548790.localdomain sudo[246211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:30 np0005548790.localdomain python3.9[246213]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:30 np0005548790.localdomain systemd[1]: Started libpod-conmon-0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.scope.
Dec 06 09:52:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:52:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:52:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:52:30 np0005548790.localdomain podman[246214]: 2025-12-06 09:52:30.434743201 +0000 UTC m=+0.121532482 container exec 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:52:30 np0005548790.localdomain podman[246214]: 2025-12-06 09:52:30.46493366 +0000 UTC m=+0.151722961 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:52:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:52:30 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55575 DF PROTO=TCP SPT=46182 DPT=9105 SEQ=813574035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804D0DF0000000001030307) 
Dec 06 09:52:30 np0005548790.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:30 np0005548790.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:30 np0005548790.localdomain sudo[246211]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:30 np0005548790.localdomain podman[246229]: 2025-12-06 09:52:30.937823522 +0000 UTC m=+0.512836921 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 09:52:30 np0005548790.localdomain podman[246228]: 2025-12-06 09:52:30.983363398 +0000 UTC m=+0.561735166 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:52:31 np0005548790.localdomain podman[246229]: 2025-12-06 09:52:31.003335008 +0000 UTC m=+0.578348467 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:52:31 np0005548790.localdomain podman[246228]: 2025-12-06 09:52:31.046205184 +0000 UTC m=+0.624576892 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:52:31 np0005548790.localdomain podman[246228]: unhealthy
Dec 06 09:52:31 np0005548790.localdomain sudo[246392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odyrfuhkxlxetsjecxyajpzrmsoerbpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014751.075275-2983-198115276488567/AnsiballZ_podman_container_exec.py
Dec 06 09:52:31 np0005548790.localdomain sudo[246392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:31 np0005548790.localdomain python3.9[246394]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:31 np0005548790.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:31 np0005548790.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:31 np0005548790.localdomain podman[239825]: time="2025-12-06T09:52:31Z" level=error msg="Getting root fs size for \"eb6133e988b02363a75dbcc7a467edd17a75a8e084238a3bdd8a4d10d41619bf\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Main process exited, code=exited, status=1/FAILURE
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Failed with result 'exit-code'.
Dec 06 09:52:31 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:31 np0005548790.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: libpod-conmon-0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.scope: Deactivated successfully.
Dec 06 09:52:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.scope.
Dec 06 09:52:31 np0005548790.localdomain podman[246395]: 2025-12-06 09:52:31.728999458 +0000 UTC m=+0.176958681 container exec 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:52:31 np0005548790.localdomain podman[246395]: 2025-12-06 09:52:31.76117153 +0000 UTC m=+0.209130753 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:52:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:52:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 06 09:52:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6cef3ed0275ec9a23efe0f943d6fe49d48d180e84df9235ed237a3514d395164-merged.mount: Deactivated successfully.
Dec 06 09:52:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6cef3ed0275ec9a23efe0f943d6fe49d48d180e84df9235ed237a3514d395164-merged.mount: Deactivated successfully.
Dec 06 09:52:32 np0005548790.localdomain sudo[246392]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:32 np0005548790.localdomain podman[246425]: 2025-12-06 09:52:32.707015345 +0000 UTC m=+0.245174148 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:52:32 np0005548790.localdomain podman[246425]: 2025-12-06 09:52:32.744076437 +0000 UTC m=+0.282235260 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 06 09:52:33 np0005548790.localdomain sudo[246550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wczajfxrlgcqevrhwsnhnjjwckgthsdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014752.7931736-2991-53741936000734/AnsiballZ_file.py
Dec 06 09:52:33 np0005548790.localdomain sudo[246550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:33 np0005548790.localdomain python3.9[246552]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:33 np0005548790.localdomain sudo[246550]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38469 DF PROTO=TCP SPT=34444 DPT=9882 SEQ=3745809365 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804DD1F0000000001030307) 
Dec 06 09:52:33 np0005548790.localdomain sudo[246660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxjpuhyeczmevaecjuhfmwexoesrirbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014753.5497646-3000-134704138325778/AnsiballZ_podman_container_info.py
Dec 06 09:52:33 np0005548790.localdomain sudo[246660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:34 np0005548790.localdomain python3.9[246662]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 06 09:52:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ed344d323c02ab9e2880ed86e85e30fa2695ed2203789077293b9be1165d5141-merged.mount: Deactivated successfully.
Dec 06 09:52:35 np0005548790.localdomain systemd[1]: libpod-conmon-0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.scope: Deactivated successfully.
Dec 06 09:52:35 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:52:36 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20644 DF PROTO=TCP SPT=46700 DPT=9101 SEQ=705662986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804E7200000000001030307) 
Dec 06 09:52:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:52:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:37 np0005548790.localdomain podman[246675]: 2025-12-06 09:52:37.708607137 +0000 UTC m=+1.225969659 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:52:37 np0005548790.localdomain podman[246675]: 2025-12-06 09:52:37.750441905 +0000 UTC m=+1.267804427 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 09:52:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:52:38 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 06 09:52:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:39 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 06 09:52:39 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:52:39 np0005548790.localdomain sudo[246660]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:39 np0005548790.localdomain podman[246696]: 2025-12-06 09:52:39.69024435 +0000 UTC m=+1.450199511 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:52:39 np0005548790.localdomain podman[246696]: 2025-12-06 09:52:39.725200366 +0000 UTC m=+1.485155567 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:52:39 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50941 DF PROTO=TCP SPT=37566 DPT=9100 SEQ=2716771628 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1804F51F0000000001030307) 
Dec 06 09:52:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:52:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:52:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:40 np0005548790.localdomain sudo[246826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrswjewalzpitnllnceburtvbwvxtavf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014759.825247-3008-106047973239625/AnsiballZ_podman_container_exec.py
Dec 06 09:52:40 np0005548790.localdomain sudo[246826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:40 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 06 09:52:40 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:52:40 np0005548790.localdomain python3.9[246828]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:41 np0005548790.localdomain systemd[1]: Started libpod-conmon-9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.scope.
Dec 06 09:52:41 np0005548790.localdomain podman[246829]: 2025-12-06 09:52:41.036551346 +0000 UTC m=+0.078447120 container exec 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec 06 09:52:41 np0005548790.localdomain podman[246829]: 2025-12-06 09:52:41.044018654 +0000 UTC m=+0.085914408 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, config_id=edpm, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 09:52:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 06 09:52:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:41 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 06 09:52:41 np0005548790.localdomain sudo[246826]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:42 np0005548790.localdomain sudo[246965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkoaqdxkdsqfwjnwneockcetkqalrftb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014761.7628527-3016-191491559809763/AnsiballZ_podman_container_exec.py
Dec 06 09:52:42 np0005548790.localdomain sudo[246965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:42 np0005548790.localdomain python3.9[246967]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 06 09:52:42 np0005548790.localdomain podman[239825]: time="2025-12-06T09:52:42Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\""
Dec 06 09:52:42 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1"
Dec 06 09:52:42 np0005548790.localdomain systemd[1]: libpod-conmon-9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.scope: Deactivated successfully.
Dec 06 09:52:42 np0005548790.localdomain systemd[1]: Started libpod-conmon-9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.scope.
Dec 06 09:52:42 np0005548790.localdomain podman[246968]: 2025-12-06 09:52:42.311362878 +0000 UTC m=+0.096943219 container exec 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, 
vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:52:42 np0005548790.localdomain podman[246968]: 2025-12-06 09:52:42.343298485 +0000 UTC m=+0.128878876 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, version=9.6, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 09:52:43 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49834 DF PROTO=TCP SPT=36540 DPT=9100 SEQ=1456699369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180502200000000001030307) 
Dec 06 09:52:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:52:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 06 09:52:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ed344d323c02ab9e2880ed86e85e30fa2695ed2203789077293b9be1165d5141-merged.mount: Deactivated successfully.
Dec 06 09:52:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ed344d323c02ab9e2880ed86e85e30fa2695ed2203789077293b9be1165d5141-merged.mount: Deactivated successfully.
Dec 06 09:52:44 np0005548790.localdomain sudo[246965]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:45 np0005548790.localdomain sudo[247114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qnmxidxdgkioxotheuchadhhmquwwqkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014764.9691477-3024-61210707227751/AnsiballZ_file.py
Dec 06 09:52:45 np0005548790.localdomain sudo[247114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:45 np0005548790.localdomain systemd[1]: libpod-conmon-9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.scope: Deactivated successfully.
Dec 06 09:52:45 np0005548790.localdomain python3.9[247116]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:45 np0005548790.localdomain sudo[247114]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 06 09:52:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 06 09:52:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 06 09:52:45 np0005548790.localdomain sudo[247227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlxksjrkcesobwiggssrrqsgwroeoted ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014765.7330005-3038-43311416858876/AnsiballZ_file.py
Dec 06 09:52:45 np0005548790.localdomain sudo[247227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 06 09:52:46 np0005548790.localdomain python3.9[247229]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:46 np0005548790.localdomain sudo[247227]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:52:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 06 09:52:47 np0005548790.localdomain sudo[247337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnvfigicqcafpmopwwflzifrbkcgxjih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014766.8076596-3065-270067224727482/AnsiballZ_stat.py
Dec 06 09:52:47 np0005548790.localdomain sudo[247337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:47 np0005548790.localdomain python3.9[247339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:47 np0005548790.localdomain sudo[247337]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:47 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:47:40 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140637 "" "Go-http-client/1.1"
Dec 06 09:52:47 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:52:47.348Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 06 09:52:47 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:52:47.349Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 06 09:52:47 np0005548790.localdomain podman_exporter[240045]: ts=2025-12-06T09:52:47.349Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Dec 06 09:52:47 np0005548790.localdomain podman[246995]: 2025-12-06 09:52:47.372771584 +0000 UTC m=+3.879577309 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:52:47 np0005548790.localdomain podman[246995]: 2025-12-06 09:52:47.452201088 +0000 UTC m=+3.959006853 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:52:47 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 06 09:52:47 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:52:47 np0005548790.localdomain sudo[247438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjhtpdmttffiioinummuxgdlqswvqgcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014766.8076596-3065-270067224727482/AnsiballZ_copy.py
Dec 06 09:52:47 np0005548790.localdomain sudo[247438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:47 np0005548790.localdomain python3.9[247440]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014766.8076596-3065-270067224727482/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:47 np0005548790.localdomain sudo[247438]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:52:48.360 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:52:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:52:48.361 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:52:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:52:48.361 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:52:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24787 DF PROTO=TCP SPT=43686 DPT=9102 SEQ=2423812774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180515F40000000001030307) 
Dec 06 09:52:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23910 DF PROTO=TCP SPT=44616 DPT=9882 SEQ=1976280715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180515F70000000001030307) 
Dec 06 09:52:48 np0005548790.localdomain sudo[247548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzqibxatgiyujlhdpijgepeoltvovcsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014768.3582735-3113-30643234225158/AnsiballZ_file.py
Dec 06 09:52:48 np0005548790.localdomain sudo[247548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:48 np0005548790.localdomain python3.9[247550]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:48 np0005548790.localdomain sudo[247548]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:49 np0005548790.localdomain auditd[726]: Audit daemon rotating log files
Dec 06 09:52:49 np0005548790.localdomain sudo[247658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhbizwporjyazhbjodmtxqnanjavfozu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014769.0781639-3136-253570726170544/AnsiballZ_stat.py
Dec 06 09:52:49 np0005548790.localdomain sudo[247658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:49 np0005548790.localdomain python3.9[247660]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:49 np0005548790.localdomain sudo[247658]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:49 np0005548790.localdomain sudo[247715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odxznxckwuuerszfdfxbjtydksusqwii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014769.0781639-3136-253570726170544/AnsiballZ_file.py
Dec 06 09:52:49 np0005548790.localdomain sudo[247715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:50 np0005548790.localdomain python3.9[247717]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:50 np0005548790.localdomain sudo[247715]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:50 np0005548790.localdomain sudo[247825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvwjgxibaopzvvukiusniolrrrmcskyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014770.3687522-3173-92731995953787/AnsiballZ_stat.py
Dec 06 09:52:50 np0005548790.localdomain sudo[247825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:50 np0005548790.localdomain python3.9[247827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:50 np0005548790.localdomain sudo[247825]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:51 np0005548790.localdomain sudo[247882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gppwhnujskttgpqqeaizjyrdluchizfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014770.3687522-3173-92731995953787/AnsiballZ_file.py
Dec 06 09:52:51 np0005548790.localdomain sudo[247882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:51 np0005548790.localdomain python3.9[247884]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4143p9hb recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:51 np0005548790.localdomain sudo[247882]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24789 DF PROTO=TCP SPT=43686 DPT=9102 SEQ=2423812774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180521DF0000000001030307) 
Dec 06 09:52:51 np0005548790.localdomain sudo[247992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjjcbattkhptppmhljtmmvceutlvdcgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014771.5517042-3209-74141186442220/AnsiballZ_stat.py
Dec 06 09:52:51 np0005548790.localdomain sudo[247992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:52 np0005548790.localdomain python3.9[247994]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:52 np0005548790.localdomain sudo[247992]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:52 np0005548790.localdomain sudo[248049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptxdsbrzvrzsrigkydcypxvvhkfkmkui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014771.5517042-3209-74141186442220/AnsiballZ_file.py
Dec 06 09:52:52 np0005548790.localdomain sudo[248049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:52 np0005548790.localdomain python3.9[248051]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:52 np0005548790.localdomain sudo[248049]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:53 np0005548790.localdomain sudo[248159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xcjypzakbpkigyuxfyzmrghjjnenlopf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014772.8071542-3247-194830284058523/AnsiballZ_command.py
Dec 06 09:52:53 np0005548790.localdomain sudo[248159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:54 np0005548790.localdomain python3.9[248161]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:52:54 np0005548790.localdomain sudo[248159]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:54 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25748 DF PROTO=TCP SPT=50176 DPT=9105 SEQ=2200523457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18052E600000000001030307) 
Dec 06 09:52:54 np0005548790.localdomain sudo[248270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-caytxtzgbiddqmghvcreppvxpmwpywrh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014774.3347094-3272-181453141441009/AnsiballZ_edpm_nftables_from_files.py
Dec 06 09:52:54 np0005548790.localdomain sudo[248270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:54 np0005548790.localdomain python3[248272]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 06 09:52:54 np0005548790.localdomain sudo[248270]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:56 np0005548790.localdomain sudo[248380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flojqqfaczclcjbbqfwxwzdhmkghkjyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014775.351072-3298-4511423631766/AnsiballZ_stat.py
Dec 06 09:52:56 np0005548790.localdomain sudo[248380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:56 np0005548790.localdomain python3.9[248382]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:56 np0005548790.localdomain sudo[248380]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:57 np0005548790.localdomain sudo[248437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqhjoojprfckyebfdlhuxtgtjotwbefv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014775.351072-3298-4511423631766/AnsiballZ_file.py
Dec 06 09:52:57 np0005548790.localdomain sudo[248437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:57 np0005548790.localdomain python3.9[248439]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:57 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43672 DF PROTO=TCP SPT=49136 DPT=9105 SEQ=2656235182 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1805391F0000000001030307) 
Dec 06 09:52:57 np0005548790.localdomain sudo[248437]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:52:57 np0005548790.localdomain podman[248457]: 2025-12-06 09:52:57.564999054 +0000 UTC m=+0.076321934 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:52:57 np0005548790.localdomain podman[248457]: 2025-12-06 09:52:57.57090798 +0000 UTC m=+0.082230910 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 09:52:57 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:52:58 np0005548790.localdomain sudo[248565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajlcqylmpgtatudhowwkskvuropfbvxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014777.727162-3332-216832646778627/AnsiballZ_stat.py
Dec 06 09:52:58 np0005548790.localdomain sudo[248565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:58 np0005548790.localdomain python3.9[248567]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:58 np0005548790.localdomain sudo[248565]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:58 np0005548790.localdomain sudo[248622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqdmgickvpbcihbktedirdjqqathrlvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014777.727162-3332-216832646778627/AnsiballZ_file.py
Dec 06 09:52:58 np0005548790.localdomain sudo[248622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:58 np0005548790.localdomain python3.9[248624]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:52:58 np0005548790.localdomain sudo[248622]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:59 np0005548790.localdomain sudo[248732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jteawovcllphygxlzbsmdsdqkoutmlse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014779.06388-3368-265175602023020/AnsiballZ_stat.py
Dec 06 09:52:59 np0005548790.localdomain sudo[248732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:52:59 np0005548790.localdomain python3.9[248734]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:52:59 np0005548790.localdomain sudo[248732]: pam_unix(sudo:session): session closed for user root
Dec 06 09:52:59 np0005548790.localdomain sudo[248789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfvyppvotegbqswqmtrctmkypnbmocfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014779.06388-3368-265175602023020/AnsiballZ_file.py
Dec 06 09:52:59 np0005548790.localdomain sudo[248789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:00 np0005548790.localdomain python3.9[248791]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:00 np0005548790.localdomain sudo[248789]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:00 np0005548790.localdomain sudo[248899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuywsfhrmlkndxttjvogtgyahhqudqrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014780.3739345-3403-152974469636575/AnsiballZ_stat.py
Dec 06 09:53:00 np0005548790.localdomain sudo[248899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:00 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25750 DF PROTO=TCP SPT=50176 DPT=9105 SEQ=2200523457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180546200000000001030307) 
Dec 06 09:53:00 np0005548790.localdomain python3.9[248901]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:00 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:00.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:00 np0005548790.localdomain sudo[248899]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:01 np0005548790.localdomain sudo[248956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cahrdhpupnzzfjqlmckwgueetpwlcnen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014780.3739345-3403-152974469636575/AnsiballZ_file.py
Dec 06 09:53:01 np0005548790.localdomain sudo[248956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:01 np0005548790.localdomain python3.9[248958]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:01 np0005548790.localdomain sudo[248956]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:01 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:01.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:02 np0005548790.localdomain sudo[249066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdtfjmhmkvvusnlvedkbwohlretqodaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014781.7563875-3440-257537396997001/AnsiballZ_stat.py
Dec 06 09:53:02 np0005548790.localdomain sudo[249066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:53:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:53:02 np0005548790.localdomain systemd[1]: tmp-crun.5U9xBp.mount: Deactivated successfully.
Dec 06 09:53:02 np0005548790.localdomain podman[249068]: 2025-12-06 09:53:02.222770523 +0000 UTC m=+0.089788730 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:53:02 np0005548790.localdomain podman[249068]: 2025-12-06 09:53:02.229334517 +0000 UTC m=+0.096352674 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:53:02 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:53:02 np0005548790.localdomain podman[249070]: 2025-12-06 09:53:02.323069421 +0000 UTC m=+0.188341922 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:53:02 np0005548790.localdomain python3.9[249069]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:02 np0005548790.localdomain podman[249070]: 2025-12-06 09:53:02.336060005 +0000 UTC m=+0.201332446 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:53:02 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:53:02 np0005548790.localdomain sudo[249066]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:02 np0005548790.localdomain sudo[249197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exabdkmqobluqnbagrjxpveeqmtscmqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014781.7563875-3440-257537396997001/AnsiballZ_copy.py
Dec 06 09:53:02 np0005548790.localdomain sudo[249197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:02.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:02.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:53:03 np0005548790.localdomain python3.9[249199]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765014781.7563875-3440-257537396997001/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:03 np0005548790.localdomain sudo[249197]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24791 DF PROTO=TCP SPT=43686 DPT=9102 SEQ=2423812774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1805511F0000000001030307) 
Dec 06 09:53:03 np0005548790.localdomain sudo[249307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azjvwoexqxkwsgmlhbvvayokgbbsdrel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014783.3178442-3484-134954720986490/AnsiballZ_file.py
Dec 06 09:53:03 np0005548790.localdomain sudo[249307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:03 np0005548790.localdomain python3.9[249309]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:03 np0005548790.localdomain sudo[249307]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:03.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:04 np0005548790.localdomain sudo[249417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynciqonwegjraecukqoodtjwvzhvzwpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014784.082194-3509-240297798177616/AnsiballZ_command.py
Dec 06 09:53:04 np0005548790.localdomain sudo[249417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:04 np0005548790.localdomain python3.9[249419]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:53:04 np0005548790.localdomain sudo[249417]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.885 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.903 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.904 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.904 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.919 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.920 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.921 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.921 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:53:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:04.922 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:53:05 np0005548790.localdomain sudo[249550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrctfiytkxaqhgshefmbazzlqmzbwdzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014784.8080363-3532-98355387167105/AnsiballZ_blockinfile.py
Dec 06 09:53:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:53:05 np0005548790.localdomain sudo[249550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:05 np0005548790.localdomain systemd[1]: tmp-crun.2js6bL.mount: Deactivated successfully.
Dec 06 09:53:05 np0005548790.localdomain podman[249552]: 2025-12-06 09:53:05.291709538 +0000 UTC m=+0.083999917 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 09:53:05 np0005548790.localdomain podman[249552]: 2025-12-06 09:53:05.303731927 +0000 UTC m=+0.096022286 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec 06 09:53:05 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.337 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:53:05 np0005548790.localdomain python3.9[249553]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:05 np0005548790.localdomain sudo[249550]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.539 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.541 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13117MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.542 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.542 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.623 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.624 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:53:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:05.650 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:53:06 np0005548790.localdomain sudo[249702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqolvdpebzbsbyumkaccqeeewzkcygol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014785.7939043-3559-278785930710329/AnsiballZ_command.py
Dec 06 09:53:06 np0005548790.localdomain sudo[249702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:06.065 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:53:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:06.071 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:53:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:06.087 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:53:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:06.090 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:53:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:06.090 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:53:06 np0005548790.localdomain python3.9[249706]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:53:06 np0005548790.localdomain sudo[249702]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:06 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57629 DF PROTO=TCP SPT=60786 DPT=9101 SEQ=2373168364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18055D1F0000000001030307) 
Dec 06 09:53:06 np0005548790.localdomain sudo[249815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fueigoqjigveidtzpwwhxdserjiauhte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014786.5714664-3584-2533472325756/AnsiballZ_stat.py
Dec 06 09:53:06 np0005548790.localdomain sudo[249815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:07 np0005548790.localdomain python3.9[249817]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:53:07 np0005548790.localdomain sudo[249815]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:53:07.087 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.320 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:53:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:53:08 np0005548790.localdomain sudo[249927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjfcgllmutsplmbarfkcxmaapoppmkaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014787.915543-3608-259465623479172/AnsiballZ_command.py
Dec 06 09:53:08 np0005548790.localdomain sudo[249927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:08 np0005548790.localdomain python3.9[249929]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:53:08 np0005548790.localdomain sudo[249927]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:08 np0005548790.localdomain sudo[250040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fnoupuklybqjknmpcgcrroqvndmvvgqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014788.641613-3632-36470201630122/AnsiballZ_file.py
Dec 06 09:53:08 np0005548790.localdomain sudo[250040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:09 np0005548790.localdomain python3.9[250042]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:09 np0005548790.localdomain sudo[250040]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:10 np0005548790.localdomain sshd[243118]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:53:10 np0005548790.localdomain systemd[1]: session-56.scope: Deactivated successfully.
Dec 06 09:53:10 np0005548790.localdomain systemd[1]: session-56.scope: Consumed 29.216s CPU time.
Dec 06 09:53:10 np0005548790.localdomain systemd-logind[760]: Session 56 logged out. Waiting for processes to exit.
Dec 06 09:53:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:53:10 np0005548790.localdomain systemd-logind[760]: Removed session 56.
Dec 06 09:53:10 np0005548790.localdomain podman[250060]: 2025-12-06 09:53:10.220967613 +0000 UTC m=+0.085846997 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3)
Dec 06 09:53:10 np0005548790.localdomain podman[250060]: 2025-12-06 09:53:10.232481568 +0000 UTC m=+0.097360982 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 09:53:10 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:53:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:53:11 np0005548790.localdomain podman[250079]: 2025-12-06 09:53:11.559689688 +0000 UTC m=+0.076941181 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:53:11 np0005548790.localdomain podman[250079]: 2025-12-06 09:53:11.567543516 +0000 UTC m=+0.084794969 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:53:11 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:53:16 np0005548790.localdomain sshd[250102]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:53:16 np0005548790.localdomain sshd[250102]: Accepted publickey for zuul from 192.168.122.30 port 33120 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:53:16 np0005548790.localdomain systemd-logind[760]: New session 57 of user zuul.
Dec 06 09:53:16 np0005548790.localdomain systemd[1]: Started Session 57 of User zuul.
Dec 06 09:53:16 np0005548790.localdomain sshd[250102]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:53:17 np0005548790.localdomain sudo[250213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbcofhvgrtcujnzqqmspkbcmcltlktdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014796.84316-28-11261169498020/AnsiballZ_file.py
Dec 06 09:53:17 np0005548790.localdomain sudo[250213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:17 np0005548790.localdomain python3.9[250215]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:17 np0005548790.localdomain sudo[250213]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:53:17 np0005548790.localdomain podman[250216]: 2025-12-06 09:53:17.68812835 +0000 UTC m=+0.074192008 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:53:17 np0005548790.localdomain podman[250216]: 2025-12-06 09:53:17.751032836 +0000 UTC m=+0.137096504 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 06 09:53:17 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:53:18 np0005548790.localdomain sudo[250350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajweacgwdmvvxapkuufcziiadxofbhgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014797.7379193-28-217595258973052/AnsiballZ_file.py
Dec 06 09:53:18 np0005548790.localdomain sudo[250350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:18 np0005548790.localdomain python3.9[250352]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:18 np0005548790.localdomain sudo[250350]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63486 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18058B210000000001030307) 
Dec 06 09:53:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:53:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:53:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:53:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142283 "" "Go-http-client/1.1"
Dec 06 09:53:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:53:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15861 "" "Go-http-client/1.1"
Dec 06 09:53:18 np0005548790.localdomain sudo[250464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avzyejgnkbqfsadbfywfhnhkyisasymw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014798.3869798-28-111027917710295/AnsiballZ_file.py
Dec 06 09:53:18 np0005548790.localdomain sudo[250464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:18 np0005548790.localdomain python3.9[250466]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:18 np0005548790.localdomain sudo[250464]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63487 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18058F1F0000000001030307) 
Dec 06 09:53:19 np0005548790.localdomain python3.9[250574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24792 DF PROTO=TCP SPT=43686 DPT=9102 SEQ=2423812774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1805911F0000000001030307) 
Dec 06 09:53:20 np0005548790.localdomain python3.9[250660]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014799.1550584-106-158984560314696/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:21 np0005548790.localdomain python3.9[250768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63488 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180597200000000001030307) 
Dec 06 09:53:21 np0005548790.localdomain python3.9[250854]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014800.6415625-151-64637680138599/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:22 np0005548790.localdomain python3.9[250962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59929 DF PROTO=TCP SPT=33164 DPT=9102 SEQ=3553402563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18059B1F0000000001030307) 
Dec 06 09:53:23 np0005548790.localdomain python3.9[251048]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014801.7959023-151-39635977715220/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:53:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:53:24 np0005548790.localdomain python3.9[251161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:24 np0005548790.localdomain python3.9[251247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014803.5526028-151-8006222488973/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=e2ae42e1af8942fb025ef50d9f7eaf3d9bc444f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63489 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1805A6DF0000000001030307) 
Dec 06 09:53:26 np0005548790.localdomain python3.9[251355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:27 np0005548790.localdomain python3.9[251441]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014806.1095526-325-237345806439192/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=b14e40a972b9e05d0e95a7e875b3201eda2c4b6d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:27 np0005548790.localdomain sudo[251499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:53:27 np0005548790.localdomain sudo[251499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:53:27 np0005548790.localdomain sudo[251499]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:53:27 np0005548790.localdomain sudo[251540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 09:53:27 np0005548790.localdomain sudo[251540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:53:27 np0005548790.localdomain podman[251531]: 2025-12-06 09:53:27.73240706 +0000 UTC m=+0.086420521 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 09:53:27 np0005548790.localdomain podman[251531]: 2025-12-06 09:53:27.742244681 +0000 UTC m=+0.096258142 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:53:27 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:53:27 np0005548790.localdomain python3.9[251605]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:53:28 np0005548790.localdomain sudo[251540]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:28 np0005548790.localdomain sudo[251645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:53:28 np0005548790.localdomain sudo[251645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:53:28 np0005548790.localdomain sudo[251645]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:28 np0005548790.localdomain sudo[251705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:53:28 np0005548790.localdomain sudo[251705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:53:28 np0005548790.localdomain sudo[251771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxddpwtykntsxjzzyfmqskmcqsnvslie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014808.3189294-397-144078919630556/AnsiballZ_file.py
Dec 06 09:53:28 np0005548790.localdomain sudo[251771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:28 np0005548790.localdomain python3.9[251773]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:28 np0005548790.localdomain sudo[251771]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:29 np0005548790.localdomain sudo[251705]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:29 np0005548790.localdomain sudo[251913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sobpwytqefczxhyubanmdwrbxhiaevbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014809.1489902-421-91304527685419/AnsiballZ_stat.py
Dec 06 09:53:29 np0005548790.localdomain sudo[251913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:29 np0005548790.localdomain python3.9[251915]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:29 np0005548790.localdomain sudo[251913]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:29 np0005548790.localdomain sudo[251970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgulxwhdbarzyerirgngcrobstjrprqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014809.1489902-421-91304527685419/AnsiballZ_file.py
Dec 06 09:53:29 np0005548790.localdomain sudo[251970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:30 np0005548790.localdomain python3.9[251972]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:30 np0005548790.localdomain sudo[251970]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:30 np0005548790.localdomain sudo[252080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrtdpxdrtrutuljebkgkdvqzxaejgiui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014810.2989094-421-52713103491503/AnsiballZ_stat.py
Dec 06 09:53:30 np0005548790.localdomain sudo[252080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:30 np0005548790.localdomain python3.9[252082]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:30 np0005548790.localdomain sudo[252080]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:31 np0005548790.localdomain sudo[252137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxelaodswupdbsvflwlprjmvypvfmyou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014810.2989094-421-52713103491503/AnsiballZ_file.py
Dec 06 09:53:31 np0005548790.localdomain sudo[252137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:31 np0005548790.localdomain python3.9[252139]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:31 np0005548790.localdomain sudo[252137]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:31 np0005548790.localdomain sudo[252247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngytkeyxezionpfnnvjcsoaoqtzhjtsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014811.651267-490-161769424560152/AnsiballZ_file.py
Dec 06 09:53:31 np0005548790.localdomain sudo[252247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:32 np0005548790.localdomain python3.9[252249]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:32 np0005548790.localdomain sudo[252247]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:53:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:53:32 np0005548790.localdomain podman[252305]: 2025-12-06 09:53:32.572424443 +0000 UTC m=+0.083607987 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:53:32 np0005548790.localdomain systemd[1]: tmp-crun.80ivj6.mount: Deactivated successfully.
Dec 06 09:53:32 np0005548790.localdomain podman[252306]: 2025-12-06 09:53:32.650975508 +0000 UTC m=+0.157065430 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec 06 09:53:32 np0005548790.localdomain podman[252305]: 2025-12-06 09:53:32.65605855 +0000 UTC m=+0.167242144 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:53:32 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:53:32 np0005548790.localdomain podman[252306]: 2025-12-06 09:53:32.686134548 +0000 UTC m=+0.192224500 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:53:32 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:53:32 np0005548790.localdomain sudo[252398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-saiqdamsrzapwwffxvoawvvtezlmwkzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014812.3857484-515-161502472513260/AnsiballZ_stat.py
Dec 06 09:53:32 np0005548790.localdomain sudo[252398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:32 np0005548790.localdomain python3.9[252400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:33 np0005548790.localdomain sudo[252398]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:33 np0005548790.localdomain sudo[252420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:53:33 np0005548790.localdomain sudo[252420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:53:33 np0005548790.localdomain sudo[252420]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:33 np0005548790.localdomain sudo[252473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myvuoedqdqyllzphvbnozgiozsomajru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014812.3857484-515-161502472513260/AnsiballZ_file.py
Dec 06 09:53:33 np0005548790.localdomain sudo[252473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:33 np0005548790.localdomain python3.9[252475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:33 np0005548790.localdomain sudo[252473]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63490 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1805C71F0000000001030307) 
Dec 06 09:53:34 np0005548790.localdomain sudo[252583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytbugwzbwykawvahwxnqmghqzpykvbuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014813.6716776-551-216107909331769/AnsiballZ_stat.py
Dec 06 09:53:34 np0005548790.localdomain sudo[252583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:34 np0005548790.localdomain python3.9[252585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:34 np0005548790.localdomain sudo[252583]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:34 np0005548790.localdomain sudo[252640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tozzbfbjkhvlthpwridkuijxofkkdhzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014813.6716776-551-216107909331769/AnsiballZ_file.py
Dec 06 09:53:34 np0005548790.localdomain sudo[252640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:34 np0005548790.localdomain python3.9[252642]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:34 np0005548790.localdomain sudo[252640]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:53:35 np0005548790.localdomain podman[252698]: 2025-12-06 09:53:35.553633336 +0000 UTC m=+0.071486343 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:53:35 np0005548790.localdomain podman[252698]: 2025-12-06 09:53:35.600368707 +0000 UTC m=+0.118221704 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, name=ubi9-minimal, maintainer=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6)
Dec 06 09:53:35 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:53:36 np0005548790.localdomain sudo[252770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxijyhawrlajwjozsbukllnnkjczzhdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014815.20631-586-209675662154525/AnsiballZ_systemd.py
Dec 06 09:53:36 np0005548790.localdomain sudo[252770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:36 np0005548790.localdomain python3.9[252772]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:53:36 np0005548790.localdomain systemd-sysv-generator[252801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:53:36 np0005548790.localdomain systemd-rc-local-generator[252796]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:36 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:37 np0005548790.localdomain sudo[252770]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:37 np0005548790.localdomain sudo[252918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nilofmiskswgymgsbbhlrovdkmmhsvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014817.3793316-610-185404171266169/AnsiballZ_stat.py
Dec 06 09:53:37 np0005548790.localdomain sudo[252918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:37 np0005548790.localdomain python3.9[252920]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:37 np0005548790.localdomain sudo[252918]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:38 np0005548790.localdomain sudo[252975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nngsvzkqymdotlikqzlnvvvccqrjuxkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014817.3793316-610-185404171266169/AnsiballZ_file.py
Dec 06 09:53:38 np0005548790.localdomain sudo[252975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:39 np0005548790.localdomain python3.9[252977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:39 np0005548790.localdomain sudo[252975]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:39 np0005548790.localdomain sudo[253085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rchhnbvufdbjlmtfrbscnpqjkknupwnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014819.343567-647-232121041171490/AnsiballZ_stat.py
Dec 06 09:53:39 np0005548790.localdomain sudo[253085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:39 np0005548790.localdomain python3.9[253087]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:39 np0005548790.localdomain sudo[253085]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:40 np0005548790.localdomain sudo[253142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwbjlklpqrivaxieaahpvcxglolvoroc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014819.343567-647-232121041171490/AnsiballZ_file.py
Dec 06 09:53:40 np0005548790.localdomain sudo[253142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:40 np0005548790.localdomain python3.9[253144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:40 np0005548790.localdomain sudo[253142]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:53:40 np0005548790.localdomain podman[253162]: 2025-12-06 09:53:40.57910913 +0000 UTC m=+0.090115865 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 09:53:40 np0005548790.localdomain podman[253162]: 2025-12-06 09:53:40.592177388 +0000 UTC m=+0.103184113 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 09:53:40 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:53:40 np0005548790.localdomain sudo[253272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsagvnwhpimilhbrnkxlfvucrpzmyrff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014820.5177374-682-276317155803292/AnsiballZ_systemd.py
Dec 06 09:53:40 np0005548790.localdomain sudo[253272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:41 np0005548790.localdomain python3.9[253274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:53:41 np0005548790.localdomain systemd-rc-local-generator[253298]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:53:41 np0005548790.localdomain systemd-sysv-generator[253302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:53:41 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:53:41 np0005548790.localdomain sudo[253272]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:42 np0005548790.localdomain sudo[253424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhwtmbztqonhseurfaotciucnecsyjgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014822.0999353-713-246280486935613/AnsiballZ_file.py
Dec 06 09:53:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:53:42 np0005548790.localdomain sudo[253424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:42 np0005548790.localdomain podman[253426]: 2025-12-06 09:53:42.472081731 +0000 UTC m=+0.075406684 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:53:42 np0005548790.localdomain podman[253426]: 2025-12-06 09:53:42.50562069 +0000 UTC m=+0.108945663 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:53:42 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:53:42 np0005548790.localdomain python3.9[253427]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:53:42 np0005548790.localdomain sudo[253424]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:43 np0005548790.localdomain sudo[253557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmsdtdngqytlvuqnsdcdlpaehzzxgtoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014822.9028585-736-107344361107643/AnsiballZ_stat.py
Dec 06 09:53:43 np0005548790.localdomain sudo[253557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:43 np0005548790.localdomain python3.9[253559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:53:43 np0005548790.localdomain sudo[253557]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:43 np0005548790.localdomain sudo[253645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdrzgfolqbcnrhddswzidimaohjjexit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014822.9028585-736-107344361107643/AnsiballZ_copy.py
Dec 06 09:53:43 np0005548790.localdomain sudo[253645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:43 np0005548790.localdomain python3.9[253647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014822.9028585-736-107344361107643/.source.json _original_basename=._7vg57wc follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:43 np0005548790.localdomain sudo[253645]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:44 np0005548790.localdomain sudo[253755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iijundbnokbffvtoaaddpetrosngyucu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014824.1954253-781-242953610478128/AnsiballZ_file.py
Dec 06 09:53:44 np0005548790.localdomain sudo[253755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:44 np0005548790.localdomain python3.9[253757]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:44 np0005548790.localdomain sudo[253755]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:45 np0005548790.localdomain sudo[253865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtmabikycfhezzslflcuohzyreqcvmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014825.0130005-805-11983335776053/AnsiballZ_stat.py
Dec 06 09:53:45 np0005548790.localdomain sudo[253865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:45 np0005548790.localdomain sudo[253865]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:45 np0005548790.localdomain sudo[253953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdyypnqbllmmzhgffzecdtfsbaxapist ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014825.0130005-805-11983335776053/AnsiballZ_copy.py
Dec 06 09:53:45 np0005548790.localdomain sudo[253953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:46 np0005548790.localdomain sudo[253953]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:46 np0005548790.localdomain sudo[254063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgexjvbilubcfftqwycgukmoymgoqbyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014826.4556453-856-229621534229005/AnsiballZ_container_config_data.py
Dec 06 09:53:46 np0005548790.localdomain sudo[254063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:47 np0005548790.localdomain python3.9[254065]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Dec 06 09:53:47 np0005548790.localdomain sudo[254063]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:47 np0005548790.localdomain sudo[254173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slxmhaxkjxvitjsjumlvkpnbtabtertf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014827.3973382-883-68307438370293/AnsiballZ_container_config_hash.py
Dec 06 09:53:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:53:47 np0005548790.localdomain sudo[254173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:48 np0005548790.localdomain systemd[1]: tmp-crun.V2Cpbh.mount: Deactivated successfully.
Dec 06 09:53:48 np0005548790.localdomain podman[254175]: 2025-12-06 09:53:48.237985016 +0000 UTC m=+0.376728912 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:53:48 np0005548790.localdomain podman[254175]: 2025-12-06 09:53:48.273259449 +0000 UTC m=+0.412003345 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 09:53:48 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:53:48 np0005548790.localdomain python3.9[254176]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:53:48 np0005548790.localdomain sudo[254173]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:53:48.361 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:53:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:53:48.361 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:53:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:53:48.362 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:53:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3825 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180600500000000001030307) 
Dec 06 09:53:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:53:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:53:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:53:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142283 "" "Go-http-client/1.1"
Dec 06 09:53:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:53:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15861 "" "Go-http-client/1.1"
Dec 06 09:53:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3826 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806045F0000000001030307) 
Dec 06 09:53:49 np0005548790.localdomain sudo[254309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-raeikducodayxfmvtswecdthtevzomgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014828.6517537-910-52007364938179/AnsiballZ_podman_container_info.py
Dec 06 09:53:49 np0005548790.localdomain sudo[254309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:49 np0005548790.localdomain python3.9[254311]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:53:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63491 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806071F0000000001030307) 
Dec 06 09:53:50 np0005548790.localdomain sudo[254309]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3827 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18060C600000000001030307) 
Dec 06 09:53:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24793 DF PROTO=TCP SPT=43686 DPT=9102 SEQ=2423812774 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18060F1F0000000001030307) 
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:53:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:53:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:53:54 np0005548790.localdomain sudo[254446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxapniaioawceygwyktvhjaajygurhyf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014833.5961416-949-136869274448485/AnsiballZ_edpm_container_manage.py
Dec 06 09:53:54 np0005548790.localdomain sudo[254446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:54 np0005548790.localdomain python3[254448]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:53:54 np0005548790.localdomain podman[254487]: 
Dec 06 09:53:54 np0005548790.localdomain podman[254487]: 2025-12-06 09:53:54.744451128 +0000 UTC m=+0.129455295 container create 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_sriov_agent, config_id=neutron_sriov_agent, managed_by=edpm_ansible)
Dec 06 09:53:54 np0005548790.localdomain podman[254487]: 2025-12-06 09:53:54.654735213 +0000 UTC m=+0.039739380 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 06 09:53:54 np0005548790.localdomain python3[254448]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638 --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 06 09:53:54 np0005548790.localdomain sudo[254446]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:55 np0005548790.localdomain sudo[254632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xprmjnvmhsiwztxqcexmtdfxicacugbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014835.132354-973-249991747985660/AnsiballZ_stat.py
Dec 06 09:53:55 np0005548790.localdomain sudo[254632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3828 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18061C1F0000000001030307) 
Dec 06 09:53:55 np0005548790.localdomain python3.9[254634]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:53:55 np0005548790.localdomain sudo[254632]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:56 np0005548790.localdomain sudo[254744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nttokzhfpoyovugzddsytrqnrylqlazo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014835.9481413-1000-52649870992559/AnsiballZ_file.py
Dec 06 09:53:56 np0005548790.localdomain sudo[254744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:56 np0005548790.localdomain python3.9[254746]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:56 np0005548790.localdomain sudo[254744]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:56 np0005548790.localdomain sudo[254799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdthpwyrxqgblurvoxgxzdxtvtbhlzkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014835.9481413-1000-52649870992559/AnsiballZ_stat.py
Dec 06 09:53:56 np0005548790.localdomain sudo[254799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:56 np0005548790.localdomain python3.9[254801]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:53:56 np0005548790.localdomain sudo[254799]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:57 np0005548790.localdomain sudo[254908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-auxsfnvglhhoqmmwpdsbiicfqoajinub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014836.9699197-1000-233277574022006/AnsiballZ_copy.py
Dec 06 09:53:57 np0005548790.localdomain sudo[254908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:57 np0005548790.localdomain python3.9[254910]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014836.9699197-1000-233277574022006/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:53:57 np0005548790.localdomain sudo[254908]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:57 np0005548790.localdomain sudo[254963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkszfuautmnjykbqwbdfazvegnqcegdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014836.9699197-1000-233277574022006/AnsiballZ_systemd.py
Dec 06 09:53:57 np0005548790.localdomain sudo[254963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: tmp-crun.q9qVyM.mount: Deactivated successfully.
Dec 06 09:53:58 np0005548790.localdomain podman[254966]: 2025-12-06 09:53:58.060350932 +0000 UTC m=+0.095170496 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 06 09:53:58 np0005548790.localdomain podman[254966]: 2025-12-06 09:53:58.068193926 +0000 UTC m=+0.103013470 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:53:58 np0005548790.localdomain python3.9[254965]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:53:58 np0005548790.localdomain systemd-rc-local-generator[255013]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:53:58 np0005548790.localdomain systemd-sysv-generator[255016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:58 np0005548790.localdomain sudo[254963]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:58 np0005548790.localdomain sudo[255074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wuhjzfcbioswphjasfmkirndpodjjqam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014836.9699197-1000-233277574022006/AnsiballZ_systemd.py
Dec 06 09:53:58 np0005548790.localdomain sudo[255074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:53:59 np0005548790.localdomain python3.9[255076]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:53:59 np0005548790.localdomain systemd-sysv-generator[255109]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:53:59 np0005548790.localdomain systemd-rc-local-generator[255105]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: Starting neutron_sriov_agent container...
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:53:59 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02fb14da98aaf9516d4dcaebbaf346f09b9170ec173ce8ff408a96415ff2a9e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:59 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02fb14da98aaf9516d4dcaebbaf346f09b9170ec173ce8ff408a96415ff2a9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:53:59 np0005548790.localdomain podman[255117]: 2025-12-06 09:53:59.850852178 +0000 UTC m=+0.102132556 container init 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2)
Dec 06 09:53:59 np0005548790.localdomain podman[255117]: 2025-12-06 09:53:59.858821225 +0000 UTC m=+0.110101603 container start 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:53:59 np0005548790.localdomain podman[255117]: neutron_sriov_agent
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: + sudo -E kolla_set_configs
Dec 06 09:53:59 np0005548790.localdomain systemd[1]: Started neutron_sriov_agent container.
Dec 06 09:53:59 np0005548790.localdomain sudo[255074]: pam_unix(sudo:session): session closed for user root
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Validating config file
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Copying service configuration files
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Writing out command to execute
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: ++ cat /run_command
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: + CMD=/usr/bin/neutron-sriov-nic-agent
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: + ARGS=
Dec 06 09:53:59 np0005548790.localdomain neutron_sriov_agent[255131]: + sudo kolla_copy_cacerts
Dec 06 09:54:00 np0005548790.localdomain neutron_sriov_agent[255131]: + [[ ! -n '' ]]
Dec 06 09:54:00 np0005548790.localdomain neutron_sriov_agent[255131]: + . kolla_extend_start
Dec 06 09:54:00 np0005548790.localdomain neutron_sriov_agent[255131]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Dec 06 09:54:00 np0005548790.localdomain neutron_sriov_agent[255131]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Dec 06 09:54:00 np0005548790.localdomain neutron_sriov_agent[255131]: + umask 0022
Dec 06 09:54:00 np0005548790.localdomain neutron_sriov_agent[255131]: + exec /usr/bin/neutron-sriov-nic-agent
Dec 06 09:54:00 np0005548790.localdomain sudo[255253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqzjikawstwcxebdssugowmnpvdlffuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014840.189705-1084-103651353887512/AnsiballZ_systemd.py
Dec 06 09:54:00 np0005548790.localdomain sudo[255253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:00 np0005548790.localdomain python3.9[255255]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:54:00 np0005548790.localdomain systemd[1]: tmp-crun.lPQHUO.mount: Deactivated successfully.
Dec 06 09:54:00 np0005548790.localdomain systemd[1]: Stopping neutron_sriov_agent container...
Dec 06 09:54:00 np0005548790.localdomain systemd[1]: libpod-88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a.scope: Deactivated successfully.
Dec 06 09:54:00 np0005548790.localdomain systemd[1]: libpod-88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a.scope: Consumed 1.028s CPU time.
Dec 06 09:54:00 np0005548790.localdomain podman[255259]: 2025-12-06 09:54:00.904209017 +0000 UTC m=+0.078843723 container died 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 09:54:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a-userdata-shm.mount: Deactivated successfully.
Dec 06 09:54:00 np0005548790.localdomain podman[255259]: 2025-12-06 09:54:00.996355025 +0000 UTC m=+0.170989701 container cleanup 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:54:00 np0005548790.localdomain podman[255259]: neutron_sriov_agent
Dec 06 09:54:00 np0005548790.localdomain podman[255272]: 2025-12-06 09:54:00.998188873 +0000 UTC m=+0.089937642 container cleanup 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent)
Dec 06 09:54:01 np0005548790.localdomain podman[255283]: 2025-12-06 09:54:01.081355256 +0000 UTC m=+0.058267889 container cleanup 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 09:54:01 np0005548790.localdomain podman[255283]: neutron_sriov_agent
Dec 06 09:54:01 np0005548790.localdomain systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully.
Dec 06 09:54:01 np0005548790.localdomain systemd[1]: Stopped neutron_sriov_agent container.
Dec 06 09:54:01 np0005548790.localdomain systemd[1]: Starting neutron_sriov_agent container...
Dec 06 09:54:01 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:54:01 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02fb14da98aaf9516d4dcaebbaf346f09b9170ec173ce8ff408a96415ff2a9e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:01 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a02fb14da98aaf9516d4dcaebbaf346f09b9170ec173ce8ff408a96415ff2a9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:54:01 np0005548790.localdomain podman[255296]: 2025-12-06 09:54:01.187328652 +0000 UTC m=+0.073487365 container init 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 06 09:54:01 np0005548790.localdomain podman[255296]: 2025-12-06 09:54:01.1957462 +0000 UTC m=+0.081904903 container start 88760bb4c6ff2ce739a543ddd1862a57e7b71275ebdd41732226a41930f3d90a (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '40a3556fb038cdb2d67944d1ecdbd8ad0d57b7fec9a29ef9a28e1f741aec5638'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent)
Dec 06 09:54:01 np0005548790.localdomain podman[255296]: neutron_sriov_agent
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + sudo -E kolla_set_configs
Dec 06 09:54:01 np0005548790.localdomain systemd[1]: Started neutron_sriov_agent container.
Dec 06 09:54:01 np0005548790.localdomain sudo[255253]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Validating config file
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Copying service configuration files
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Writing out command to execute
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: ++ cat /run_command
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + CMD=/usr/bin/neutron-sriov-nic-agent
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + ARGS=
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + sudo kolla_copy_cacerts
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + [[ ! -n '' ]]
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + . kolla_extend_start
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + umask 0022
Dec 06 09:54:01 np0005548790.localdomain neutron_sriov_agent[255311]: + exec /usr/bin/neutron-sriov-nic-agent
Dec 06 09:54:02 np0005548790.localdomain sshd[250102]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:54:02 np0005548790.localdomain systemd[1]: session-57.scope: Deactivated successfully.
Dec 06 09:54:02 np0005548790.localdomain systemd[1]: session-57.scope: Consumed 23.611s CPU time.
Dec 06 09:54:02 np0005548790.localdomain systemd-logind[760]: Session 57 logged out. Waiting for processes to exit.
Dec 06 09:54:02 np0005548790.localdomain systemd-logind[760]: Removed session 57.
Dec 06 09:54:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:02.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.892 2 INFO neutron.common.config [-] Logging enabled!
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.892 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.892 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.892 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.892 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.893 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.893 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005548790.localdomain'}
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.893 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-a0af420b-f78e-4799-97af-5c19d7f7a0f7 - - - - - -] RPC agent_id: nic-switch-agent.np0005548790.localdomain
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.898 2 INFO neutron.agent.agent_extensions_manager [None req-a0af420b-f78e-4799-97af-5c19d7f7a0f7 - - - - - -] Loaded agent extensions: ['qos']
Dec 06 09:54:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:02.898 2 INFO neutron.agent.agent_extensions_manager [None req-a0af420b-f78e-4799-97af-5c19d7f7a0f7 - - - - - -] Initializing agent extension 'qos'
Dec 06 09:54:03 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:03.177 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-a0af420b-f78e-4799-97af-5c19d7f7a0f7 - - - - - -] Agent initialized successfully, now running... 
Dec 06 09:54:03 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:03.177 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-a0af420b-f78e-4799-97af-5c19d7f7a0f7 - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Dec 06 09:54:03 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 09:54:03.177 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-a0af420b-f78e-4799-97af-5c19d7f7a0f7 - - - - - -] Agent out of sync with plugin!
Dec 06 09:54:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:54:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:54:03 np0005548790.localdomain podman[255344]: 2025-12-06 09:54:03.57501717 +0000 UTC m=+0.089787797 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:54:03 np0005548790.localdomain podman[255344]: 2025-12-06 09:54:03.585144112 +0000 UTC m=+0.099914779 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:54:03 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:54:03 np0005548790.localdomain podman[255345]: 2025-12-06 09:54:03.675641117 +0000 UTC m=+0.186309218 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 09:54:03 np0005548790.localdomain podman[255345]: 2025-12-06 09:54:03.692269007 +0000 UTC m=+0.202937118 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 09:54:03 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:54:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:03.882 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3829 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18063D1F0000000001030307) 
Dec 06 09:54:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:04.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:04.885 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:54:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:04.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:54:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:04.913 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:54:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:04.913 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:05.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:54:06 np0005548790.localdomain podman[255384]: 2025-12-06 09:54:06.295268112 +0000 UTC m=+0.078205857 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal)
Dec 06 09:54:06 np0005548790.localdomain podman[255384]: 2025-12-06 09:54:06.313231227 +0000 UTC m=+0.096168962 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 09:54:06 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.904 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.905 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.905 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.905 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:54:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:06.906 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.363 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.568 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.570 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=13005MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.571 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.571 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.668 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.668 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:54:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:07.714 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:54:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:08.155 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:54:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:08.160 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:54:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:08.175 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:54:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:08.176 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:54:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:08.176 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:54:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:54:09.172 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:54:09 np0005548790.localdomain sshd[255448]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:54:09 np0005548790.localdomain sshd[255448]: Accepted publickey for zuul from 192.168.122.30 port 33962 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:54:09 np0005548790.localdomain systemd-logind[760]: New session 58 of user zuul.
Dec 06 09:54:09 np0005548790.localdomain systemd[1]: Started Session 58 of User zuul.
Dec 06 09:54:09 np0005548790.localdomain sshd[255448]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:54:10 np0005548790.localdomain python3.9[255559]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:54:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:54:11 np0005548790.localdomain sudo[255681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oovsbjrdopxzvbyxzjitowdicnpoybtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014851.2438939-67-161319409241057/AnsiballZ_setup.py
Dec 06 09:54:11 np0005548790.localdomain sudo[255681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:11 np0005548790.localdomain podman[255652]: 2025-12-06 09:54:11.578926454 +0000 UTC m=+0.084620953 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 06 09:54:11 np0005548790.localdomain podman[255652]: 2025-12-06 09:54:11.594187609 +0000 UTC m=+0.099882108 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 09:54:11 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:54:11 np0005548790.localdomain python3.9[255684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:54:12 np0005548790.localdomain sudo[255681]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:12 np0005548790.localdomain sudo[255751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kolofzzhokcamsnkkrjqfmwxgjvaqkyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014851.2438939-67-161319409241057/AnsiballZ_dnf.py
Dec 06 09:54:12 np0005548790.localdomain sudo[255751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:54:12 np0005548790.localdomain podman[255754]: 2025-12-06 09:54:12.758184575 +0000 UTC m=+0.093642198 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:54:12 np0005548790.localdomain podman[255754]: 2025-12-06 09:54:12.76726735 +0000 UTC m=+0.102724923 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:54:12 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:54:12 np0005548790.localdomain python3.9[255753]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:54:16 np0005548790.localdomain sudo[255751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:16 np0005548790.localdomain sudo[255886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sjgqgbrfgfugfdntvgozsbvjqcbklyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014856.4321802-103-56677300617881/AnsiballZ_systemd.py
Dec 06 09:54:16 np0005548790.localdomain sudo[255886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:17 np0005548790.localdomain python3.9[255888]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 06 09:54:17 np0005548790.localdomain sudo[255886]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25642 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180675810000000001030307) 
Dec 06 09:54:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:54:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:54:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:54:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144241 "" "Go-http-client/1.1"
Dec 06 09:54:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:54:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16304 "" "Go-http-client/1.1"
Dec 06 09:54:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:54:18 np0005548790.localdomain podman[255909]: 2025-12-06 09:54:18.563129732 +0000 UTC m=+0.080704672 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 06 09:54:18 np0005548790.localdomain podman[255909]: 2025-12-06 09:54:18.629279665 +0000 UTC m=+0.146854615 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:54:18 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:54:19 np0005548790.localdomain sudo[256024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlmxzfzrplgezcpgqpdwbbdugxshnpmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014858.6622713-130-195837963745620/AnsiballZ_file.py
Dec 06 09:54:19 np0005548790.localdomain sudo[256024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:19 np0005548790.localdomain python3.9[256026]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:19 np0005548790.localdomain sudo[256024]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25643 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806799F0000000001030307) 
Dec 06 09:54:19 np0005548790.localdomain sudo[256134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmdbhlbwhuolmqiqautztimqfazekllu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014859.4547284-130-38216508474469/AnsiballZ_file.py
Dec 06 09:54:19 np0005548790.localdomain sudo[256134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:19 np0005548790.localdomain python3.9[256136]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:19 np0005548790.localdomain sudo[256134]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3830 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18067D1F0000000001030307) 
Dec 06 09:54:20 np0005548790.localdomain sudo[256244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yiopwlpfbaztjbojegcqqwuuxitumrwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014860.068313-130-33645833201828/AnsiballZ_file.py
Dec 06 09:54:20 np0005548790.localdomain sudo[256244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:20 np0005548790.localdomain python3.9[256246]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:20 np0005548790.localdomain sudo[256244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:21 np0005548790.localdomain sudo[256354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajgkluuwayurwiqkalthkcctazatcfgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014860.7242012-130-109620264532894/AnsiballZ_file.py
Dec 06 09:54:21 np0005548790.localdomain sudo[256354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:21 np0005548790.localdomain python3.9[256356]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:21 np0005548790.localdomain sudo[256354]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25644 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806819F0000000001030307) 
Dec 06 09:54:21 np0005548790.localdomain sudo[256464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzrbkpvlanlodtsmapbcqxdohbuwgwst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014861.3960087-130-14448023905136/AnsiballZ_file.py
Dec 06 09:54:21 np0005548790.localdomain sudo[256464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:21 np0005548790.localdomain python3.9[256466]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:21 np0005548790.localdomain sudo[256464]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63492 DF PROTO=TCP SPT=52530 DPT=9102 SEQ=3625756705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806851F0000000001030307) 
Dec 06 09:54:22 np0005548790.localdomain sudo[256574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swuyxnmhydoggmaqxfjvhnykovmuhteu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014862.1382344-130-180729032821203/AnsiballZ_file.py
Dec 06 09:54:22 np0005548790.localdomain sudo[256574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:22 np0005548790.localdomain python3.9[256576]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:22 np0005548790.localdomain sudo[256574]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:23 np0005548790.localdomain sudo[256684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgifatzqsveltpsszzzqdaiiwzwcuazf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014862.8139496-130-106299901613726/AnsiballZ_file.py
Dec 06 09:54:23 np0005548790.localdomain sudo[256684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:23 np0005548790.localdomain python3.9[256686]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:23 np0005548790.localdomain sudo[256684]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:54:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:54:24 np0005548790.localdomain sudo[256795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahrhgpquenfezfmolwqudvsrixmbkxbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014863.6447086-280-32145625205369/AnsiballZ_stat.py
Dec 06 09:54:24 np0005548790.localdomain sudo[256795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:24 np0005548790.localdomain python3.9[256797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:24 np0005548790.localdomain sudo[256795]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:24 np0005548790.localdomain sudo[256883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukkwdfagxytrblotkbrqfnrncetqbodw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014863.6447086-280-32145625205369/AnsiballZ_copy.py
Dec 06 09:54:24 np0005548790.localdomain sudo[256883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:25 np0005548790.localdomain python3.9[256885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014863.6447086-280-32145625205369/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:25 np0005548790.localdomain sudo[256883]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25645 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806915F0000000001030307) 
Dec 06 09:54:25 np0005548790.localdomain python3.9[256993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:26 np0005548790.localdomain python3.9[257079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014865.310429-325-46858721638962/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:27 np0005548790.localdomain python3.9[257187]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:27 np0005548790.localdomain python3.9[257273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014866.8104308-325-91082563627487/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:28 np0005548790.localdomain python3.9[257381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:54:28 np0005548790.localdomain podman[257382]: 2025-12-06 09:54:28.560001907 +0000 UTC m=+0.071384580 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:54:28 np0005548790.localdomain podman[257382]: 2025-12-06 09:54:28.563482168 +0000 UTC m=+0.074864881 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:54:28 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:54:29 np0005548790.localdomain python3.9[257486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014867.8689556-325-8062682914411/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=d8a5f05be29036179ecb7b001a729a6d48ac464b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:31 np0005548790.localdomain python3.9[257594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:31 np0005548790.localdomain python3.9[257680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014870.6544921-500-127057343570973/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=b14e40a972b9e05d0e95a7e875b3201eda2c4b6d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:32 np0005548790.localdomain python3.9[257788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:32 np0005548790.localdomain python3.9[257874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014871.884407-544-4419546348313/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:33 np0005548790.localdomain sudo[257930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:54:33 np0005548790.localdomain sudo[257930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:54:33 np0005548790.localdomain sudo[257930]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:33 np0005548790.localdomain sudo[257965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:54:33 np0005548790.localdomain sudo[257965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:54:33 np0005548790.localdomain python3.9[258018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25646 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806B11F0000000001030307) 
Dec 06 09:54:33 np0005548790.localdomain sudo[257965]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:34 np0005548790.localdomain python3.9[258124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014873.1681743-544-230983415611070/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:54:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:54:34 np0005548790.localdomain podman[258171]: 2025-12-06 09:54:34.58555151 +0000 UTC m=+0.094426756 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 09:54:34 np0005548790.localdomain podman[258171]: 2025-12-06 09:54:34.624464419 +0000 UTC m=+0.133339655 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 09:54:34 np0005548790.localdomain systemd[1]: tmp-crun.xWfPgs.mount: Deactivated successfully.
Dec 06 09:54:34 np0005548790.localdomain podman[258170]: 2025-12-06 09:54:34.638488032 +0000 UTC m=+0.148453967 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:54:34 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:54:34 np0005548790.localdomain podman[258170]: 2025-12-06 09:54:34.652155756 +0000 UTC m=+0.162121711 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:54:34 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:54:34 np0005548790.localdomain sudo[258274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:54:34 np0005548790.localdomain sudo[258274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:54:34 np0005548790.localdomain sudo[258274]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:34 np0005548790.localdomain python3.9[258300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:35 np0005548790.localdomain python3.9[258359]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:36 np0005548790.localdomain python3.9[258467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:54:36 np0005548790.localdomain python3.9[258553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765014875.5486467-631-275989652415045/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:36 np0005548790.localdomain podman[258554]: 2025-12-06 09:54:36.575654718 +0000 UTC m=+0.084258584 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec 06 09:54:36 np0005548790.localdomain podman[258554]: 2025-12-06 09:54:36.594160647 +0000 UTC m=+0.102764513 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, distribution-scope=public)
Dec 06 09:54:36 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:54:37 np0005548790.localdomain python3.9[258680]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:54:38 np0005548790.localdomain sudo[258790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crmxywxfkxoxopkmvjdnvzjoyqjbkrvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014877.7240815-737-2125422599934/AnsiballZ_file.py
Dec 06 09:54:38 np0005548790.localdomain sudo[258790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:38 np0005548790.localdomain python3.9[258792]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:38 np0005548790.localdomain sudo[258790]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:38 np0005548790.localdomain sudo[258900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iasitlfmlqegwboaiqxaitsfvzihricv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014878.533833-760-23476879036571/AnsiballZ_stat.py
Dec 06 09:54:38 np0005548790.localdomain sudo[258900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:39 np0005548790.localdomain python3.9[258902]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:39 np0005548790.localdomain sudo[258900]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:39 np0005548790.localdomain sudo[258957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgwkwawzikahwreneqjvktiehwwepwec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014878.533833-760-23476879036571/AnsiballZ_file.py
Dec 06 09:54:39 np0005548790.localdomain sudo[258957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:39 np0005548790.localdomain python3.9[258959]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:39 np0005548790.localdomain sudo[258957]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:39 np0005548790.localdomain sudo[259067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgzwytsrlisduohpligjhggiaxyznzdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014879.7122896-760-87772158378751/AnsiballZ_stat.py
Dec 06 09:54:39 np0005548790.localdomain sudo[259067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:40 np0005548790.localdomain python3.9[259069]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:40 np0005548790.localdomain sudo[259067]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:40 np0005548790.localdomain sudo[259124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thwhdiylfjeyhwoxdjviulmutdlgogtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014879.7122896-760-87772158378751/AnsiballZ_file.py
Dec 06 09:54:40 np0005548790.localdomain sudo[259124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:40 np0005548790.localdomain python3.9[259126]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:40 np0005548790.localdomain sudo[259124]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:41 np0005548790.localdomain sudo[259234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmypjpvujvajweretfvkffwcnidrnoqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014880.9640613-830-48390313487625/AnsiballZ_file.py
Dec 06 09:54:41 np0005548790.localdomain sudo[259234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:41 np0005548790.localdomain python3.9[259236]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:41 np0005548790.localdomain sudo[259234]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:42 np0005548790.localdomain sudo[259344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cspwvmsaguypckxgxebsniqsykynkulq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014881.7581217-854-161520989968158/AnsiballZ_stat.py
Dec 06 09:54:42 np0005548790.localdomain sudo[259344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:54:42 np0005548790.localdomain podman[259347]: 2025-12-06 09:54:42.132724363 +0000 UTC m=+0.087772584 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 06 09:54:42 np0005548790.localdomain podman[259347]: 2025-12-06 09:54:42.142969379 +0000 UTC m=+0.098017590 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:54:42 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:54:42 np0005548790.localdomain python3.9[259346]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:42 np0005548790.localdomain sshd[259366]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:54:42 np0005548790.localdomain sudo[259344]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:42 np0005548790.localdomain sudo[259422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqukiibcjfeyqoefxiyfsdmiuhipzmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014881.7581217-854-161520989968158/AnsiballZ_file.py
Dec 06 09:54:42 np0005548790.localdomain sudo[259422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:42 np0005548790.localdomain python3.9[259424]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:42 np0005548790.localdomain sudo[259422]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:43 np0005548790.localdomain sudo[259532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jsenjmwryllikizuzisicshgverwuzcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014883.0467818-890-76240779381709/AnsiballZ_stat.py
Dec 06 09:54:43 np0005548790.localdomain sudo[259532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:54:43 np0005548790.localdomain podman[259535]: 2025-12-06 09:54:43.480132291 +0000 UTC m=+0.095303140 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:54:43 np0005548790.localdomain podman[259535]: 2025-12-06 09:54:43.493475507 +0000 UTC m=+0.108646396 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:54:43 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:54:43 np0005548790.localdomain python3.9[259534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:43 np0005548790.localdomain sudo[259532]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:43 np0005548790.localdomain sudo[259612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eapxdkkuccsqotbcostrgyjdbbajruya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014883.0467818-890-76240779381709/AnsiballZ_file.py
Dec 06 09:54:43 np0005548790.localdomain sudo[259612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:44 np0005548790.localdomain python3.9[259614]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:44 np0005548790.localdomain sudo[259612]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:44 np0005548790.localdomain sudo[259722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwlrsdbosiuirvlpxcvvkulyxxmakzas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014884.2419586-925-180105131666465/AnsiballZ_systemd.py
Dec 06 09:54:44 np0005548790.localdomain sudo[259722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:44 np0005548790.localdomain python3.9[259724]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:54:44 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:54:45 np0005548790.localdomain systemd-rc-local-generator[259745]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:45 np0005548790.localdomain systemd-sysv-generator[259751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:45 np0005548790.localdomain sudo[259722]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:45 np0005548790.localdomain sudo[259870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfhpmogangiocfftdmnxomiqcxqegfur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014885.5436134-950-53614681882512/AnsiballZ_stat.py
Dec 06 09:54:45 np0005548790.localdomain sudo[259870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:46 np0005548790.localdomain python3.9[259872]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:46 np0005548790.localdomain sudo[259870]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:46 np0005548790.localdomain sudo[259927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wowerwgirbthnfesmsrpivdkonxdyvwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014885.5436134-950-53614681882512/AnsiballZ_file.py
Dec 06 09:54:46 np0005548790.localdomain sudo[259927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:46 np0005548790.localdomain python3.9[259929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:46 np0005548790.localdomain sudo[259927]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:47 np0005548790.localdomain sudo[260037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwaekgrjscmetaelfksacxclgykufwln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014886.916108-985-98066399295757/AnsiballZ_stat.py
Dec 06 09:54:47 np0005548790.localdomain sudo[260037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:47 np0005548790.localdomain sshd[259366]: Received disconnect from 101.47.160.186 port 52712:11: Bye Bye [preauth]
Dec 06 09:54:47 np0005548790.localdomain sshd[259366]: Disconnected from authenticating user root 101.47.160.186 port 52712 [preauth]
Dec 06 09:54:47 np0005548790.localdomain python3.9[260039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:47 np0005548790.localdomain sudo[260037]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:47 np0005548790.localdomain sudo[260094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrqmvqpbiutifwscnoroeuycdrrxkdpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014886.916108-985-98066399295757/AnsiballZ_file.py
Dec 06 09:54:47 np0005548790.localdomain sudo[260094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:47 np0005548790.localdomain python3.9[260096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:47 np0005548790.localdomain sudo[260094]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:54:48.362 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:54:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:54:48.364 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:54:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:54:48.365 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:54:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12834 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806EAB10000000001030307) 
Dec 06 09:54:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:54:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:54:48 np0005548790.localdomain sudo[260204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxkmqevvbviepscnamweotswhdeiwavw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014888.1346476-1021-181134459313285/AnsiballZ_systemd.py
Dec 06 09:54:48 np0005548790.localdomain sudo[260204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:54:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144241 "" "Go-http-client/1.1"
Dec 06 09:54:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:54:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16307 "" "Go-http-client/1.1"
Dec 06 09:54:48 np0005548790.localdomain python3.9[260206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:54:48 np0005548790.localdomain systemd-rc-local-generator[260247]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:54:48 np0005548790.localdomain systemd-sysv-generator[260250]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:54:48 np0005548790.localdomain podman[260208]: 2025-12-06 09:54:48.877860118 +0000 UTC m=+0.095716420 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 09:54:48 np0005548790.localdomain podman[260208]: 2025-12-06 09:54:48.912414564 +0000 UTC m=+0.130270866 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:48 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:54:49 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:54:49 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:54:49 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:54:49 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:54:49 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:54:49 np0005548790.localdomain sudo[260204]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12835 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806EE9F0000000001030307) 
Dec 06 09:54:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25647 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806F11F0000000001030307) 
Dec 06 09:54:50 np0005548790.localdomain sudo[260381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtcfiydvbnabnxccdxkqynupcqjkwusw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014889.8246894-1052-166826509943062/AnsiballZ_file.py
Dec 06 09:54:50 np0005548790.localdomain sudo[260381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:50 np0005548790.localdomain python3.9[260383]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:54:50 np0005548790.localdomain sudo[260381]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:50 np0005548790.localdomain sudo[260491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbhpyvnhkdvgegzjzbuqtphxkbsvjrzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014890.5637372-1075-47070344363925/AnsiballZ_stat.py
Dec 06 09:54:50 np0005548790.localdomain sudo[260491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:51 np0005548790.localdomain python3.9[260493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:54:51 np0005548790.localdomain sudo[260491]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:51 np0005548790.localdomain sudo[260579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvtauzjmvthlkykzcikhvamoqjlsmngz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014890.5637372-1075-47070344363925/AnsiballZ_copy.py
Dec 06 09:54:51 np0005548790.localdomain sudo[260579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12836 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806F69F0000000001030307) 
Dec 06 09:54:51 np0005548790.localdomain python3.9[260581]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765014890.5637372-1075-47070344363925/.source.json _original_basename=.fbgvkilt follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:51 np0005548790.localdomain sudo[260579]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:52 np0005548790.localdomain sudo[260689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aumxfhculxliwpwqcboweiahdvzxijxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014891.8385577-1120-212383925390159/AnsiballZ_file.py
Dec 06 09:54:52 np0005548790.localdomain sudo[260689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:52 np0005548790.localdomain python3.9[260691]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:54:52 np0005548790.localdomain sudo[260689]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3831 DF PROTO=TCP SPT=44672 DPT=9102 SEQ=3071194906 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1806FB200000000001030307) 
Dec 06 09:54:52 np0005548790.localdomain sudo[260799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjobalxarfukmczfxrforrddbghnwtha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014892.608041-1144-233263845081318/AnsiballZ_stat.py
Dec 06 09:54:52 np0005548790.localdomain sudo[260799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:53 np0005548790.localdomain sudo[260799]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:53 np0005548790.localdomain sudo[260887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erjxxcpzhqqaxisnoxjznlvdgcovkscj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014892.608041-1144-233263845081318/AnsiballZ_copy.py
Dec 06 09:54:53 np0005548790.localdomain sudo[260887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:54:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:54:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:54:53 np0005548790.localdomain sudo[260887]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:54 np0005548790.localdomain sudo[260997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxmizgvrfqrtkqckovqbrtsyejdonnle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014894.328171-1195-137557475077179/AnsiballZ_container_config_data.py
Dec 06 09:54:54 np0005548790.localdomain sudo[260997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:54 np0005548790.localdomain python3.9[260999]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False
Dec 06 09:54:54 np0005548790.localdomain sudo[260997]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12837 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807065F0000000001030307) 
Dec 06 09:54:55 np0005548790.localdomain sudo[261107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcbfobfdilqrgngegapwfhgkckkjztwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014895.177028-1222-267944924367858/AnsiballZ_container_config_hash.py
Dec 06 09:54:55 np0005548790.localdomain sudo[261107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:55 np0005548790.localdomain python3.9[261109]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:54:55 np0005548790.localdomain sudo[261107]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:56 np0005548790.localdomain sudo[261217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfnzqpzqadhptsicorfacujoporrsxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014896.247142-1249-139703098627974/AnsiballZ_podman_container_info.py
Dec 06 09:54:56 np0005548790.localdomain sudo[261217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:54:56 np0005548790.localdomain python3.9[261219]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:54:57 np0005548790.localdomain sudo[261217]: pam_unix(sudo:session): session closed for user root
Dec 06 09:54:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:54:59 np0005548790.localdomain podman[261263]: 2025-12-06 09:54:59.540948785 +0000 UTC m=+0.062659715 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 09:54:59 np0005548790.localdomain podman[261263]: 2025-12-06 09:54:59.57513328 +0000 UTC m=+0.096844250 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 09:54:59 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:55:01 np0005548790.localdomain sudo[261371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sibccufacqqoaihhhyapfzygdjxfttwe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014900.72497-1288-212497900705026/AnsiballZ_edpm_container_manage.py
Dec 06 09:55:01 np0005548790.localdomain sudo[261371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:01 np0005548790.localdomain python3[261373]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:55:01 np0005548790.localdomain podman[261411]: 
Dec 06 09:55:01 np0005548790.localdomain podman[261411]: 2025-12-06 09:55:01.710942801 +0000 UTC m=+0.074614123 container create aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.vendor=CentOS, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=neutron_dhcp, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:55:01 np0005548790.localdomain podman[261411]: 2025-12-06 09:55:01.668249046 +0000 UTC m=+0.031920398 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 09:55:01 np0005548790.localdomain python3[261373]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 09:55:01 np0005548790.localdomain sudo[261371]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:02 np0005548790.localdomain sudo[261556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfxcainlwvevkordycljwbrwmpnnwydq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014902.2779446-1312-74496482085879/AnsiballZ_stat.py
Dec 06 09:55:02 np0005548790.localdomain sudo[261556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:02 np0005548790.localdomain python3.9[261558]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:55:02 np0005548790.localdomain sudo[261556]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:02.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:02 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:02.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:03 np0005548790.localdomain sudo[261668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qyqyoizsuckbcprhwfyjiiyehxdjlrrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014903.1347353-1339-83344762092563/AnsiballZ_file.py
Dec 06 09:55:03 np0005548790.localdomain sudo[261668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:03 np0005548790.localdomain python3.9[261670]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:03 np0005548790.localdomain sudo[261668]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12838 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807271F0000000001030307) 
Dec 06 09:55:03 np0005548790.localdomain sudo[261723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-inbarpwefzmwhdafdtpwhsnehbikxhue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014903.1347353-1339-83344762092563/AnsiballZ_stat.py
Dec 06 09:55:03 np0005548790.localdomain sudo[261723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:03.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:03.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:55:04 np0005548790.localdomain python3.9[261725]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:55:04 np0005548790.localdomain sudo[261723]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:04 np0005548790.localdomain sudo[261832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uofpwurwgfexrvvffenwrybjnwffnodd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014904.1230526-1339-153132832437067/AnsiballZ_copy.py
Dec 06 09:55:04 np0005548790.localdomain sudo[261832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:04 np0005548790.localdomain python3.9[261834]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014904.1230526-1339-153132832437067/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:04 np0005548790.localdomain sudo[261832]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:04.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:04.887 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:55:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:04.887 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:55:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:04.907 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:55:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:04.908 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:05 np0005548790.localdomain sudo[261887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-colrkcmonsovwvsadhhbrvfhwljonjsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014904.1230526-1339-153132832437067/AnsiballZ_systemd.py
Dec 06 09:55:05 np0005548790.localdomain sudo[261887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:55:05 np0005548790.localdomain podman[261891]: 2025-12-06 09:55:05.152933262 +0000 UTC m=+0.085114646 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:55:05 np0005548790.localdomain podman[261891]: 2025-12-06 09:55:05.167025487 +0000 UTC m=+0.099206851 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:55:05 np0005548790.localdomain podman[261890]: 2025-12-06 09:55:05.251438724 +0000 UTC m=+0.185418394 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:55:05 np0005548790.localdomain podman[261890]: 2025-12-06 09:55:05.261229118 +0000 UTC m=+0.195208768 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:55:05 np0005548790.localdomain python3.9[261889]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:55:05 np0005548790.localdomain systemd-rc-local-generator[261956]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:55:05 np0005548790.localdomain systemd-sysv-generator[261961]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:05 np0005548790.localdomain sudo[261887]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:05.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:06 np0005548790.localdomain sudo[262020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvnvqotscaponjhnlfqrzhiziprqubxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014904.1230526-1339-153132832437067/AnsiballZ_systemd.py
Dec 06 09:55:06 np0005548790.localdomain sudo[262020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:06 np0005548790.localdomain python3.9[262022]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:55:06 np0005548790.localdomain systemd-rc-local-generator[262048]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:55:06 np0005548790.localdomain systemd-sysv-generator[262051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: tmp-crun.YEeIua.mount: Deactivated successfully.
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:55:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb5d1e9137a7d14427affab949d5a47b712382310f5c78ac75e74ce9804f6b5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb5d1e9137a7d14427affab949d5a47b712382310f5c78ac75e74ce9804f6b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:06 np0005548790.localdomain podman[262064]: 2025-12-06 09:55:06.968036446 +0000 UTC m=+0.124241101 container init aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:55:06 np0005548790.localdomain podman[262062]: 2025-12-06 09:55:06.970848949 +0000 UTC m=+0.128957372 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:55:06 np0005548790.localdomain podman[262064]: 2025-12-06 09:55:06.977980713 +0000 UTC m=+0.134185368 container start aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:55:06 np0005548790.localdomain podman[262064]: neutron_dhcp_agent
Dec 06 09:55:06 np0005548790.localdomain neutron_dhcp_agent[262091]: + sudo -E kolla_set_configs
Dec 06 09:55:06 np0005548790.localdomain podman[262062]: 2025-12-06 09:55:06.983047334 +0000 UTC m=+0.141155757 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: Started neutron_dhcp_agent container.
Dec 06 09:55:06 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:55:07 np0005548790.localdomain sudo[262020]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Validating config file
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Copying service configuration files
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Writing out command to execute
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: ++ cat /run_command
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + ARGS=
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + sudo kolla_copy_cacerts
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + [[ ! -n '' ]]
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + . kolla_extend_start
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + umask 0022
Dec 06 09:55:07 np0005548790.localdomain neutron_dhcp_agent[262091]: + exec /usr/bin/neutron-dhcp-agent
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:55:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:55:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:07.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:08 np0005548790.localdomain neutron_dhcp_agent[262091]: 2025-12-06 09:55:08.287 262102 INFO neutron.common.config [-] Logging enabled!
Dec 06 09:55:08 np0005548790.localdomain neutron_dhcp_agent[262091]: 2025-12-06 09:55:08.287 262102 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Dec 06 09:55:08 np0005548790.localdomain neutron_dhcp_agent[262091]: 2025-12-06 09:55:08.674 262102 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.883 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.904 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.904 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.905 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.906 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:55:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:08.907 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:55:09 np0005548790.localdomain sudo[262239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tizwpwpbdhdevyiduqjmvlghuylhwrkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014908.793941-1423-180372018932363/AnsiballZ_systemd.py
Dec 06 09:55:09 np0005548790.localdomain sudo[262239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.365 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:55:09 np0005548790.localdomain python3.9[262243]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:55:09 np0005548790.localdomain systemd[1]: Stopping neutron_dhcp_agent container...
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.556 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.557 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12871MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.557 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.558 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.672 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.672 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:55:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:09.690 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:55:09 np0005548790.localdomain systemd[1]: libpod-aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37.scope: Deactivated successfully.
Dec 06 09:55:09 np0005548790.localdomain systemd[1]: libpod-aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37.scope: Consumed 2.019s CPU time.
Dec 06 09:55:09 np0005548790.localdomain podman[262249]: 2025-12-06 09:55:09.834850416 +0000 UTC m=+0.394153423 container died aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_dhcp, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 09:55:09 np0005548790.localdomain systemd[1]: tmp-crun.Usk7Pf.mount: Deactivated successfully.
Dec 06 09:55:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37-userdata-shm.mount: Deactivated successfully.
Dec 06 09:55:09 np0005548790.localdomain podman[262249]: 2025-12-06 09:55:09.933682386 +0000 UTC m=+0.492985303 container cleanup aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=neutron_dhcp, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']})
Dec 06 09:55:09 np0005548790.localdomain podman[262249]: neutron_dhcp_agent
Dec 06 09:55:10 np0005548790.localdomain podman[262305]: error opening file `/run/crun/aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37/status`: No such file or directory
Dec 06 09:55:10 np0005548790.localdomain podman[262294]: 2025-12-06 09:55:10.032445106 +0000 UTC m=+0.067329776 container cleanup aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 09:55:10 np0005548790.localdomain podman[262294]: neutron_dhcp_agent
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: Stopped neutron_dhcp_agent container.
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Dec 06 09:55:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:10.129 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:55:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:10.135 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:55:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb5d1e9137a7d14427affab949d5a47b712382310f5c78ac75e74ce9804f6b5/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb5d1e9137a7d14427affab949d5a47b712382310f5c78ac75e74ce9804f6b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:10 np0005548790.localdomain podman[262307]: 2025-12-06 09:55:10.152945067 +0000 UTC m=+0.097909127 container init aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent)
Dec 06 09:55:10 np0005548790.localdomain podman[262307]: 2025-12-06 09:55:10.158599164 +0000 UTC m=+0.103563224 container start aff65d3858b151147199b97a032b18642aa256d7ddecb8ec34ddf89c0e966e37 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '680a4cb500f57a7b7bc130692ebec2059b7b563e6ff014c86804d3fc951d0349'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 09:55:10 np0005548790.localdomain podman[262307]: neutron_dhcp_agent
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + sudo -E kolla_set_configs
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: Started neutron_dhcp_agent container.
Dec 06 09:55:10 np0005548790.localdomain sudo[262239]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Validating config file
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Copying service configuration files
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Writing out command to execute
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 06 09:55:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:10.218 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: ++ cat /run_command
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + ARGS=
Dec 06 09:55:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:10.221 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:55:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:10.221 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + sudo kolla_copy_cacerts
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + [[ ! -n '' ]]
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + . kolla_extend_start
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + umask 0022
Dec 06 09:55:10 np0005548790.localdomain neutron_dhcp_agent[262322]: + exec /usr/bin/neutron-dhcp-agent
Dec 06 09:55:10 np0005548790.localdomain sshd[255448]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: session-58.scope: Deactivated successfully.
Dec 06 09:55:10 np0005548790.localdomain systemd[1]: session-58.scope: Consumed 34.249s CPU time.
Dec 06 09:55:10 np0005548790.localdomain systemd-logind[760]: Session 58 logged out. Waiting for processes to exit.
Dec 06 09:55:10 np0005548790.localdomain systemd-logind[760]: Removed session 58.
Dec 06 09:55:11 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:11.436 262327 INFO neutron.common.config [-] Logging enabled!
Dec 06 09:55:11 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:11.436 262327 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Dec 06 09:55:11 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:11.819 262327 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 06 09:55:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:55:12 np0005548790.localdomain podman[262356]: 2025-12-06 09:55:12.547652416 +0000 UTC m=+0.056658389 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec 06 09:55:12 np0005548790.localdomain podman[262356]: 2025-12-06 09:55:12.564676007 +0000 UTC m=+0.073681950 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:55:12 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:55:13 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:13.871 262327 INFO neutron.agent.dhcp.agent [None req-46a7a84c-e698-412b-97a9-087a8b1e6289 - - - - - -] All active networks have been fetched through RPC.
Dec 06 09:55:13 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:13.871 262327 INFO neutron.agent.dhcp.agent [-] Starting network 652b6bdc-40ce-45b7-8aa5-3bca79987993 dhcp configuration
Dec 06 09:55:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:55:14 np0005548790.localdomain podman[262375]: 2025-12-06 09:55:14.563637963 +0000 UTC m=+0.081914203 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:55:14 np0005548790.localdomain podman[262375]: 2025-12-06 09:55:14.576178668 +0000 UTC m=+0.094454898 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:55:14 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:55:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:15.579 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 09:55:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:15.581 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 09:55:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:15.582 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 09:55:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:16.089 262327 INFO oslo.privsep.daemon [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpp3njpjgi/privsep.sock']
Dec 06 09:55:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:16.701 262327 INFO oslo.privsep.daemon [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Spawned new privsep daemon via rootwrap
Dec 06 09:55:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:16.601 262401 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 09:55:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:16.605 262401 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 09:55:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:16.610 262401 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 06 09:55:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:16.610 262401 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262401
Dec 06 09:55:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:17.219 262327 INFO oslo.privsep.daemon [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpa4urufzz/privsep.sock']
Dec 06 09:55:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:17.841 262327 INFO oslo.privsep.daemon [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Spawned new privsep daemon via rootwrap
Dec 06 09:55:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:17.733 262410 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 09:55:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:17.740 262410 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 09:55:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:17.746 262410 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 09:55:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:17.746 262410 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262410
Dec 06 09:55:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1922 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18075FE10000000001030307) 
Dec 06 09:55:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:55:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:55:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:55:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146546 "" "Go-http-client/1.1"
Dec 06 09:55:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:55:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16745 "" "Go-http-client/1.1"
Dec 06 09:55:18 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:18.802 262327 INFO oslo.privsep.daemon [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp4b968r1n/privsep.sock']
Dec 06 09:55:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:19.400 262327 INFO oslo.privsep.daemon [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Spawned new privsep daemon via rootwrap
Dec 06 09:55:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:19.280 262422 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 09:55:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:19.286 262422 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 09:55:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:19.290 262422 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 09:55:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:19.290 262422 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262422
Dec 06 09:55:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1923 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180763DF0000000001030307) 
Dec 06 09:55:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:55:19 np0005548790.localdomain podman[262426]: 2025-12-06 09:55:19.557489758 +0000 UTC m=+0.069856341 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 09:55:19 np0005548790.localdomain podman[262426]: 2025-12-06 09:55:19.59426774 +0000 UTC m=+0.106634293 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 09:55:19 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:55:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12839 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807671F0000000001030307) 
Dec 06 09:55:20 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:20.780 262327 INFO neutron.agent.linux.ip_lib [None req-7d06afec-a1e8-4a98-8c75-760b2d49e981 - - - - - -] Device tap5a7c43c6-fc cannot be used as it has no MAC address
Dec 06 09:55:20 np0005548790.localdomain kernel: device tap5a7c43c6-fc entered promiscuous mode
Dec 06 09:55:20 np0005548790.localdomain NetworkManager[5968]: <info>  [1765014920.8298] manager: (tap5a7c43c6-fc): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Dec 06 09:55:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:20Z|00025|binding|INFO|Claiming lport 5a7c43c6-fc9f-4cdd-8263-7f473c208a14 for this chassis.
Dec 06 09:55:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:20Z|00026|binding|INFO|5a7c43c6-fc9f-4cdd-8263-7f473c208a14: Claiming unknown
Dec 06 09:55:20 np0005548790.localdomain systemd-udevd[262462]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: hostname: np0005548790.localdomain
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap5a7c43c6-fc: No such device
Dec 06 09:55:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:20.943 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-652b6bdc-40ce-45b7-8aa5-3bca79987993', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d603431c0bb4967bafc7a0aa6108bfe', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7872d306-938e-4ee0-be61-57ba3983d747, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=5a7c43c6-fc9f-4cdd-8263-7f473c208a14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 09:55:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:20.946 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 5a7c43c6-fc9f-4cdd-8263-7f473c208a14 in datapath 652b6bdc-40ce-45b7-8aa5-3bca79987993 bound to our chassis
Dec 06 09:55:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:20.949 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 30c34a4f-e2ec-4cff-86f4-ede944ae9220 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 09:55:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:20Z|00027|binding|INFO|Setting lport 5a7c43c6-fc9f-4cdd-8263-7f473c208a14 ovn-installed in OVS
Dec 06 09:55:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:20.950 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 652b6bdc-40ce-45b7-8aa5-3bca79987993, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 09:55:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:20Z|00028|binding|INFO|Setting lport 5a7c43c6-fc9f-4cdd-8263-7f473c208a14 up in Southbound
Dec 06 09:55:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:20.951 159200 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpwp97yhja/privsep.sock']
Dec 06 09:55:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:21Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-afa07b-0
Dec 06 09:55:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:21Z|00030|ovn_bfd|INFO|Enabled BFD on interface ovn-ca3c1f-0
Dec 06 09:55:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:21Z|00031|ovn_bfd|INFO|Enabled BFD on interface ovn-bd2a75-0
Dec 06 09:55:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1924 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18076BDF0000000001030307) 
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.520 159200 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.520 159200 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwp97yhja/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.421 262518 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.427 262518 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.431 262518 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.432 262518 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262518
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.522 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[112dce46-6a66-4314-ac1f-e632d67c197d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 09:55:21 np0005548790.localdomain podman[262545]: 
Dec 06 09:55:21 np0005548790.localdomain podman[262545]: 2025-12-06 09:55:21.79679877 +0000 UTC m=+0.077411405 container create 539d9396d377b3eb27272adbc01c1f0923124872c3d7688a697ccc316a5c2873 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652b6bdc-40ce-45b7-8aa5-3bca79987993, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 09:55:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-539d9396d377b3eb27272adbc01c1f0923124872c3d7688a697ccc316a5c2873.scope.
Dec 06 09:55:21 np0005548790.localdomain systemd[1]: tmp-crun.9Ix0bo.mount: Deactivated successfully.
Dec 06 09:55:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:55:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fcc2a1b323f9d42b74650d636e6f449a47e8c2798054610d60e87683deb9330/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 09:55:21 np0005548790.localdomain podman[262545]: 2025-12-06 09:55:21.871879766 +0000 UTC m=+0.152492391 container init 539d9396d377b3eb27272adbc01c1f0923124872c3d7688a697ccc316a5c2873 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652b6bdc-40ce-45b7-8aa5-3bca79987993, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:55:21 np0005548790.localdomain podman[262545]: 2025-12-06 09:55:21.771588307 +0000 UTC m=+0.052200972 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 09:55:21 np0005548790.localdomain podman[262545]: 2025-12-06 09:55:21.877647945 +0000 UTC m=+0.158260570 container start 539d9396d377b3eb27272adbc01c1f0923124872c3d7688a697ccc316a5c2873 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-652b6bdc-40ce-45b7-8aa5-3bca79987993, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 09:55:21 np0005548790.localdomain dnsmasq[262563]: started, version 2.85 cachesize 150
Dec 06 09:55:21 np0005548790.localdomain dnsmasq[262563]: DNS service limited to local subnets
Dec 06 09:55:21 np0005548790.localdomain dnsmasq[262563]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 09:55:21 np0005548790.localdomain dnsmasq[262563]: warning: no upstream servers configured
Dec 06 09:55:21 np0005548790.localdomain dnsmasq-dhcp[262563]: DHCP, static leases only on 192.168.0.0, lease time 1d
Dec 06 09:55:21 np0005548790.localdomain dnsmasq[262563]: read /var/lib/neutron/dhcp/652b6bdc-40ce-45b7-8aa5-3bca79987993/addn_hosts - 2 addresses
Dec 06 09:55:21 np0005548790.localdomain dnsmasq-dhcp[262563]: read /var/lib/neutron/dhcp/652b6bdc-40ce-45b7-8aa5-3bca79987993/host
Dec 06 09:55:21 np0005548790.localdomain dnsmasq-dhcp[262563]: read /var/lib/neutron/dhcp/652b6bdc-40ce-45b7-8aa5-3bca79987993/opts
Dec 06 09:55:21 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:21.927 262327 INFO neutron.agent.dhcp.agent [None req-d0e6e3b5-1daf-4a32-863f-9878fc9dc635 - - - - - -] Finished network 652b6bdc-40ce-45b7-8aa5-3bca79987993 dhcp configuration
Dec 06 09:55:21 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:21.928 262327 INFO neutron.agent.dhcp.agent [None req-46a7a84c-e698-412b-97a9-087a8b1e6289 - - - - - -] Synchronizing state complete
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.974 262518 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.974 262518 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:55:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:21.974 262518 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:55:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:22.070 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[093dd4e5-ae98-4e00-b5e0-a34caf67e34f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 09:55:22 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:22.128 262327 INFO neutron.agent.dhcp.agent [None req-46a7a84c-e698-412b-97a9-087a8b1e6289 - - - - - -] DHCP agent started
Dec 06 09:55:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25648 DF PROTO=TCP SPT=41104 DPT=9102 SEQ=1708550821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18076F200000000001030307) 
Dec 06 09:55:22 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 09:55:22.574 262327 INFO neutron.agent.dhcp.agent [None req-7d400c14-5908-464b-89ea-2177c097addc - - - - - -] DHCP configuration for ports {'4fb81ffd-e198-4628-9bd0-0c0f0c89c33a', '86fc0b7a-fbc5-4d40-a0d4-e1a3d455af5b', 'e2f8d27d-14b3-427b-b7c6-48605fbb9c14'} is completed
Dec 06 09:55:23 np0005548790.localdomain sshd[262564]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:55:23 np0005548790.localdomain sshd[262564]: Accepted publickey for zuul from 192.168.122.30 port 55304 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:55:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:55:23 np0005548790.localdomain systemd-logind[760]: New session 59 of user zuul.
Dec 06 09:55:23 np0005548790.localdomain systemd[1]: Started Session 59 of User zuul.
Dec 06 09:55:23 np0005548790.localdomain sshd[262564]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:55:24 np0005548790.localdomain python3.9[262675]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:55:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1925 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18077BA00000000001030307) 
Dec 06 09:55:26 np0005548790.localdomain python3.9[262787]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:55:26 np0005548790.localdomain network[262804]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:55:26 np0005548790.localdomain network[262805]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:55:26 np0005548790.localdomain network[262806]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:55:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:55:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:55:29 np0005548790.localdomain podman[262926]: 2025-12-06 09:55:29.686836505 +0000 UTC m=+0.071441362 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 09:55:29 np0005548790.localdomain podman[262926]: 2025-12-06 09:55:29.720130628 +0000 UTC m=+0.104735465 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 09:55:29 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:55:33 np0005548790.localdomain sudo[263057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzlnuogouvbllnwqfmwupczqqeztezlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014933.181387-103-120443632300946/AnsiballZ_setup.py
Dec 06 09:55:33 np0005548790.localdomain sudo[263057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1926 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18079B1F0000000001030307) 
Dec 06 09:55:33 np0005548790.localdomain python3.9[263059]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 06 09:55:34 np0005548790.localdomain sudo[263057]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:34 np0005548790.localdomain sudo[263120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oigosbighimqvbszwcbupjuxemvlayag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014933.181387-103-120443632300946/AnsiballZ_dnf.py
Dec 06 09:55:34 np0005548790.localdomain sudo[263120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:34 np0005548790.localdomain python3.9[263122]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:55:34 np0005548790.localdomain sudo[263124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:34 np0005548790.localdomain sudo[263124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:55:34 np0005548790.localdomain sudo[263124]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:34 np0005548790.localdomain sudo[263142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 09:55:34 np0005548790.localdomain sudo[263142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:55:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:55:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:55:35 np0005548790.localdomain podman[263201]: 2025-12-06 09:55:35.57743494 +0000 UTC m=+0.087115868 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 09:55:35 np0005548790.localdomain podman[263201]: 2025-12-06 09:55:35.590279453 +0000 UTC m=+0.099960411 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:55:35 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:55:35 np0005548790.localdomain podman[263202]: 2025-12-06 09:55:35.674064084 +0000 UTC m=+0.179684816 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm)
Dec 06 09:55:35 np0005548790.localdomain podman[263202]: 2025-12-06 09:55:35.684157085 +0000 UTC m=+0.189777807 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125)
Dec 06 09:55:35 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:55:35 np0005548790.localdomain podman[263268]: 2025-12-06 09:55:35.88505552 +0000 UTC m=+0.093572715 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64)
Dec 06 09:55:36 np0005548790.localdomain podman[263268]: 2025-12-06 09:55:36.015311895 +0000 UTC m=+0.223829090 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True)
Dec 06 09:55:36 np0005548790.localdomain sudo[263142]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:36 np0005548790.localdomain sudo[263334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:55:36 np0005548790.localdomain sudo[263334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:55:36 np0005548790.localdomain sudo[263334]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:36 np0005548790.localdomain sudo[263352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:55:36 np0005548790.localdomain sudo[263352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:55:37 np0005548790.localdomain sudo[263352]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:55:37 np0005548790.localdomain podman[263401]: 2025-12-06 09:55:37.576725616 +0000 UTC m=+0.086962264 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 09:55:37 np0005548790.localdomain podman[263401]: 2025-12-06 09:55:37.622266805 +0000 UTC m=+0.132503483 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 09:55:37 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:55:37 np0005548790.localdomain sudo[263120]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:37 np0005548790.localdomain sudo[263421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:55:37 np0005548790.localdomain sudo[263421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:55:37 np0005548790.localdomain sudo[263421]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:38 np0005548790.localdomain sudo[263546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odjxjacbvhhnbdyeszanlglicycxtymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014938.299109-140-97535714120955/AnsiballZ_stat.py
Dec 06 09:55:38 np0005548790.localdomain sudo[263546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:38 np0005548790.localdomain python3.9[263548]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:55:38 np0005548790.localdomain sudo[263546]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:39 np0005548790.localdomain sudo[263656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpapfysgptvcgkabyzeodbvqgtsusnhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014939.393576-170-98885801686970/AnsiballZ_command.py
Dec 06 09:55:39 np0005548790.localdomain sudo[263656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:39 np0005548790.localdomain python3.9[263658]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:55:39 np0005548790.localdomain sudo[263656]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:40 np0005548790.localdomain sudo[263767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-anzfgdjruabnnhleymbgsjbpexrzjpaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014940.4364252-199-199264449908277/AnsiballZ_stat.py
Dec 06 09:55:40 np0005548790.localdomain sudo[263767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:41 np0005548790.localdomain python3.9[263769]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:55:41 np0005548790.localdomain sudo[263767]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:42 np0005548790.localdomain sudo[263879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qoxfuieyibsmbppsenjwzdexfkphbguy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014941.5282462-233-44314319031000/AnsiballZ_lineinfile.py
Dec 06 09:55:42 np0005548790.localdomain sudo[263879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:42 np0005548790.localdomain python3.9[263881]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:42 np0005548790.localdomain sudo[263879]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:43 np0005548790.localdomain sudo[263989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrygsxtxadodwhompwbdzohmjbyooqji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014942.659376-259-265422307671505/AnsiballZ_systemd_service.py
Dec 06 09:55:43 np0005548790.localdomain sudo[263989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:55:43 np0005548790.localdomain podman[263992]: 2025-12-06 09:55:43.335356538 +0000 UTC m=+0.065704950 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd)
Dec 06 09:55:43 np0005548790.localdomain podman[263992]: 2025-12-06 09:55:43.349324451 +0000 UTC m=+0.079672863 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:55:43 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:55:43 np0005548790.localdomain python3.9[263991]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:55:43 np0005548790.localdomain sudo[263989]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:44 np0005548790.localdomain sudo[264120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oeyydhfjtydkiwvnbpmdulchuhsnjehs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014943.9452028-283-238494794785301/AnsiballZ_systemd_service.py
Dec 06 09:55:44 np0005548790.localdomain sudo[264120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:44 np0005548790.localdomain python3.9[264122]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:55:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:55:44 np0005548790.localdomain sudo[264120]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:44 np0005548790.localdomain podman[264124]: 2025-12-06 09:55:44.69770412 +0000 UTC m=+0.070999627 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:55:44 np0005548790.localdomain podman[264124]: 2025-12-06 09:55:44.707029362 +0000 UTC m=+0.080324859 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:55:44 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:55:46 np0005548790.localdomain sudo[264253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfjkblillyhvcmtwxslnkbpjnmeqndsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014946.1476462-317-145963616045002/AnsiballZ_service_facts.py
Dec 06 09:55:46 np0005548790.localdomain sudo[264253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:46 np0005548790.localdomain python3.9[264255]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:55:46 np0005548790.localdomain network[264272]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:55:46 np0005548790.localdomain network[264273]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:55:46 np0005548790.localdomain network[264274]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:55:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:48.364 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:55:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:48.364 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:55:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:55:48.365 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:55:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65161 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807D5110000000001030307) 
Dec 06 09:55:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:55:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:55:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:55:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:55:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:55:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17224 "" "Go-http-client/1.1"
Dec 06 09:55:49 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:55:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65162 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807D91F0000000001030307) 
Dec 06 09:55:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:55:49 np0005548790.localdomain podman[264344]: 2025-12-06 09:55:49.74447931 +0000 UTC m=+0.090806912 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Dec 06 09:55:49 np0005548790.localdomain podman[264344]: 2025-12-06 09:55:49.856172384 +0000 UTC m=+0.202499946 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 09:55:49 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:55:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1927 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807DB200000000001030307) 
Dec 06 09:55:50 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:55:50Z|00032|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 06 09:55:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65163 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807E1200000000001030307) 
Dec 06 09:55:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12840 DF PROTO=TCP SPT=39184 DPT=9102 SEQ=1835009632 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807E5200000000001030307) 
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:55:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:55:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:55:53 np0005548790.localdomain sudo[264253]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:54 np0005548790.localdomain sudo[264531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uoskzongbmkkzhqiibxzaizfnalcrwor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014954.355033-347-199195073893937/AnsiballZ_file.py
Dec 06 09:55:54 np0005548790.localdomain sudo[264531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:55 np0005548790.localdomain python3.9[264533]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:55:55 np0005548790.localdomain sudo[264531]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65164 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1807F0DF0000000001030307) 
Dec 06 09:55:55 np0005548790.localdomain sudo[264641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpqlnzacehfmfejyzwsgecdvlkdanneu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014955.3165054-371-198317960930452/AnsiballZ_modprobe.py
Dec 06 09:55:55 np0005548790.localdomain sudo[264641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:55 np0005548790.localdomain python3.9[264643]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 06 09:55:55 np0005548790.localdomain sudo[264641]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:56 np0005548790.localdomain sudo[264751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-royujgsjqfsutfziinnfinmccvmmoolf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014956.1751482-395-100913953637809/AnsiballZ_stat.py
Dec 06 09:55:56 np0005548790.localdomain sudo[264751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:56 np0005548790.localdomain python3.9[264753]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:55:56 np0005548790.localdomain sudo[264751]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:56 np0005548790.localdomain sudo[264808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djgshtasjvonielekcpbchhsbwimaycn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014956.1751482-395-100913953637809/AnsiballZ_file.py
Dec 06 09:55:56 np0005548790.localdomain sudo[264808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:56.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:56.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 09:55:56 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:56.902 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:57 np0005548790.localdomain python3.9[264810]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:57 np0005548790.localdomain sudo[264808]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:57 np0005548790.localdomain sudo[264918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhjxrmqunuosbbrrjtszjbleodhiqfxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014957.5118408-433-265626487170380/AnsiballZ_lineinfile.py
Dec 06 09:55:57 np0005548790.localdomain sudo[264918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:57.913 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:55:57 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:57.913 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 09:55:57 np0005548790.localdomain python3.9[264920]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:55:57 np0005548790.localdomain sudo[264918]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:58 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:55:58.141 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 09:55:58 np0005548790.localdomain sudo[265028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyemdhuxwwghpbrqjbocuzqeoxcacwad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014958.2908654-461-203045559600109/AnsiballZ_file.py
Dec 06 09:55:58 np0005548790.localdomain sudo[265028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:58 np0005548790.localdomain python3.9[265030]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:55:58 np0005548790.localdomain sudo[265028]: pam_unix(sudo:session): session closed for user root
Dec 06 09:55:59 np0005548790.localdomain sudo[265138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hatdlgshblajmzceyegxoqjfqqwpbgug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014959.1180866-487-208139399494044/AnsiballZ_stat.py
Dec 06 09:55:59 np0005548790.localdomain sudo[265138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:55:59 np0005548790.localdomain python3.9[265140]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:55:59 np0005548790.localdomain sudo[265138]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:00 np0005548790.localdomain sudo[265250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjdhixzxgwnprzzvobxgbrtloceuprfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014960.0962434-515-205313870151007/AnsiballZ_stat.py
Dec 06 09:56:00 np0005548790.localdomain sudo[265250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:56:00 np0005548790.localdomain podman[265253]: 2025-12-06 09:56:00.463905775 +0000 UTC m=+0.082185178 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:56:00 np0005548790.localdomain podman[265253]: 2025-12-06 09:56:00.474299555 +0000 UTC m=+0.092578998 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 09:56:00 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:56:00 np0005548790.localdomain python3.9[265252]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:56:00 np0005548790.localdomain sudo[265250]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:01 np0005548790.localdomain sudo[265382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdxatbtqhuazotbuyqiinrwegnfwgsjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014960.9301636-542-83092174366700/AnsiballZ_command.py
Dec 06 09:56:01 np0005548790.localdomain sudo[265382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:01 np0005548790.localdomain python3.9[265384]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:56:01 np0005548790.localdomain sudo[265382]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:02 np0005548790.localdomain sudo[265493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjnnnwtswqovmuuegpwkdkpdfclasaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014961.7816854-572-45816678280552/AnsiballZ_replace.py
Dec 06 09:56:02 np0005548790.localdomain sudo[265493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:02 np0005548790.localdomain python3.9[265495]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:02 np0005548790.localdomain sudo[265493]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:02 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Dec 06 09:56:02 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 09:56:02 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:56:02 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:56:02 np0005548790.localdomain sudo[265604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emfxjsjoounvyomkhmkdrdpdrjmafsmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014962.7266288-599-15458956848423/AnsiballZ_lineinfile.py
Dec 06 09:56:02 np0005548790.localdomain sudo[265604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:03.114 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:03.115 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:03 np0005548790.localdomain python3.9[265606]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:03 np0005548790.localdomain sudo[265604]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:03 np0005548790.localdomain sudo[265714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlppjfyzddyeyuvosyxoynrfbgmwynmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014963.2872477-599-12338391462555/AnsiballZ_lineinfile.py
Dec 06 09:56:03 np0005548790.localdomain sudo[265714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:03 np0005548790.localdomain python3.9[265716]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:03 np0005548790.localdomain sudo[265714]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65165 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808111F0000000001030307) 
Dec 06 09:56:03 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:03.883 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:04 np0005548790.localdomain sudo[265824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvmxiqzvgmqmkylhkkuopqsdyongwidy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014963.8736813-599-275000927693618/AnsiballZ_lineinfile.py
Dec 06 09:56:04 np0005548790.localdomain sudo[265824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:04 np0005548790.localdomain python3.9[265826]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:04 np0005548790.localdomain sudo[265824]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:04 np0005548790.localdomain sudo[265934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmuuwruxsxntpoegcxjxxcvskbtjdodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014964.5546749-599-131720154651544/AnsiballZ_lineinfile.py
Dec 06 09:56:04 np0005548790.localdomain sudo[265934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:04.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:04.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:56:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:04.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:56:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:04.904 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:56:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:04.904 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:04.904 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:56:05 np0005548790.localdomain python3.9[265936]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:05 np0005548790.localdomain sudo[265934]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:05 np0005548790.localdomain sudo[266044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izwhgjcrvqzvksopkxhsmouwikdsdmyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014965.4541435-686-55365822987120/AnsiballZ_stat.py
Dec 06 09:56:05 np0005548790.localdomain sudo[266044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:56:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:56:05 np0005548790.localdomain systemd[1]: tmp-crun.YdUA1r.mount: Deactivated successfully.
Dec 06 09:56:05 np0005548790.localdomain podman[266048]: 2025-12-06 09:56:05.866612791 +0000 UTC m=+0.093565674 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 09:56:05 np0005548790.localdomain podman[266048]: 2025-12-06 09:56:05.883317665 +0000 UTC m=+0.110270578 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 06 09:56:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:05.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:05.888 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:05 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:56:05 np0005548790.localdomain python3.9[266046]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:56:05 np0005548790.localdomain podman[266047]: 2025-12-06 09:56:05.95237276 +0000 UTC m=+0.182310301 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 09:56:05 np0005548790.localdomain podman[266047]: 2025-12-06 09:56:05.960656756 +0000 UTC m=+0.190594227 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:56:05 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:56:05 np0005548790.localdomain sudo[266044]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:06 np0005548790.localdomain sudo[266197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qweitfylteyedjmshdwhdpatpqylhitu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014966.5433168-716-61474766363068/AnsiballZ_file.py
Dec 06 09:56:06 np0005548790.localdomain sudo[266197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:07 np0005548790.localdomain python3.9[266199]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:56:07 np0005548790.localdomain sudo[266197]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:07 np0005548790.localdomain sudo[266307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsjkgefoojribgmasftiquawkomivjhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014967.3113756-740-3161822094219/AnsiballZ_stat.py
Dec 06 09:56:07 np0005548790.localdomain sudo[266307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:07.608 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:07 np0005548790.localdomain python3.9[266309]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:07 np0005548790.localdomain sudo[266307]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:08 np0005548790.localdomain sudo[266364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjtneywijuenvbgljkgecmdntrtwmffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014967.3113756-740-3161822094219/AnsiballZ_file.py
Dec 06 09:56:08 np0005548790.localdomain sudo[266364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:56:08 np0005548790.localdomain podman[266367]: 2025-12-06 09:56:08.18268528 +0000 UTC m=+0.077159676 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, version=9.6, architecture=x86_64, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 09:56:08 np0005548790.localdomain podman[266367]: 2025-12-06 09:56:08.197441835 +0000 UTC m=+0.091916231 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Dec 06 09:56:08 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:56:08 np0005548790.localdomain python3.9[266366]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:56:08 np0005548790.localdomain sudo[266364]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:08 np0005548790.localdomain sudo[266494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqwdvvdpknghplqczyskhrsqajyxlxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014968.4505455-740-43727080919754/AnsiballZ_stat.py
Dec 06 09:56:08 np0005548790.localdomain sudo[266494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:56:08 np0005548790.localdomain python3.9[266496]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.912 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.912 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.912 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.912 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:56:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:08.913 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:56:08 np0005548790.localdomain sudo[266494]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:09 np0005548790.localdomain sudo[266571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksqmdiefsowwtpivnaqsdlsspuyrwmpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014968.4505455-740-43727080919754/AnsiballZ_file.py
Dec 06 09:56:09 np0005548790.localdomain sudo[266571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.359 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:56:09 np0005548790.localdomain python3.9[266573]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:56:09 np0005548790.localdomain sudo[266571]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.534 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.535 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12526MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.535 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.535 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.715 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.715 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.814 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.878 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.878 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.905 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.935 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,HW_CPU_X86_SSE4A,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_RESCUE_BFV,HW_CPU_X86_ABM,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_F16C,HW_CPU_X86_BMI2,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_CLMUL,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 09:56:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:09.952 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:56:10 np0005548790.localdomain sudo[266684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kapbhmdgywtqrooqysfxiechbqjuppzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014969.727167-809-196091696299772/AnsiballZ_file.py
Dec 06 09:56:10 np0005548790.localdomain sudo[266684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:10 np0005548790.localdomain python3.9[266686]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:10 np0005548790.localdomain sudo[266684]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:10.391 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:56:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:10.397 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:56:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:10.414 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:56:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:10.416 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:56:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:56:10.417 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:56:10 np0005548790.localdomain sudo[266815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsjxyyymwcqdbebpkfcdpvdfgtyozlof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014970.475431-833-28567659588363/AnsiballZ_stat.py
Dec 06 09:56:10 np0005548790.localdomain sudo[266815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:10 np0005548790.localdomain python3.9[266817]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:10 np0005548790.localdomain sudo[266815]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:11 np0005548790.localdomain sudo[266872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-naiclblkrgrkjmfgefhmoqxcnfwoemgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014970.475431-833-28567659588363/AnsiballZ_file.py
Dec 06 09:56:11 np0005548790.localdomain sudo[266872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:11 np0005548790.localdomain python3.9[266874]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:11 np0005548790.localdomain sudo[266872]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:11 np0005548790.localdomain sudo[266982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjxxdvnhwmklntfozuchpwlyzpgqwfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014971.6866763-868-86486201672423/AnsiballZ_stat.py
Dec 06 09:56:11 np0005548790.localdomain sudo[266982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:12 np0005548790.localdomain python3.9[266984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:12 np0005548790.localdomain sudo[266982]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:12 np0005548790.localdomain sudo[267039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urwvzdvshtangovcomuunexslmalwjxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014971.6866763-868-86486201672423/AnsiballZ_file.py
Dec 06 09:56:12 np0005548790.localdomain sudo[267039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:12 np0005548790.localdomain python3.9[267041]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:12 np0005548790.localdomain sudo[267039]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:56:13 np0005548790.localdomain sudo[267160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-powevlubydaafxzlqpmpjzdkagwpmcab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014973.018782-905-104563657341803/AnsiballZ_systemd.py
Dec 06 09:56:13 np0005548790.localdomain sudo[267160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:13 np0005548790.localdomain podman[267130]: 2025-12-06 09:56:13.576060114 +0000 UTC m=+0.088396080 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:56:13 np0005548790.localdomain podman[267130]: 2025-12-06 09:56:13.587487911 +0000 UTC m=+0.099823887 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:56:13 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:56:13 np0005548790.localdomain python3.9[267167]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:56:13 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:56:13 np0005548790.localdomain systemd-rc-local-generator[267193]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:14 np0005548790.localdomain systemd-sysv-generator[267198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:14 np0005548790.localdomain sudo[267160]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:56:15 np0005548790.localdomain podman[267225]: 2025-12-06 09:56:15.548705705 +0000 UTC m=+0.066830459 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:56:15 np0005548790.localdomain podman[267225]: 2025-12-06 09:56:15.56314584 +0000 UTC m=+0.081270584 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:56:15 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:56:15 np0005548790.localdomain sudo[267339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yldeidfjbqizldsodufklbyzsngrbwzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014975.641595-930-180231171269375/AnsiballZ_stat.py
Dec 06 09:56:15 np0005548790.localdomain sudo[267339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:16 np0005548790.localdomain python3.9[267341]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:16 np0005548790.localdomain sudo[267339]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:16 np0005548790.localdomain sudo[267396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmkiavrrmupalpkivrwblfftvklkploa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014975.641595-930-180231171269375/AnsiballZ_file.py
Dec 06 09:56:16 np0005548790.localdomain sudo[267396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:16 np0005548790.localdomain python3.9[267398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:16 np0005548790.localdomain sudo[267396]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:17 np0005548790.localdomain sudo[267506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eptvfgwycdwycdweuospwasqzjcqndis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014976.819843-965-84504277358905/AnsiballZ_stat.py
Dec 06 09:56:17 np0005548790.localdomain sudo[267506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:17 np0005548790.localdomain python3.9[267508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:17 np0005548790.localdomain sudo[267506]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:17 np0005548790.localdomain sudo[267563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvwvpbmrgzivuxkzygydwuytjuwlovym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014976.819843-965-84504277358905/AnsiballZ_file.py
Dec 06 09:56:17 np0005548790.localdomain sudo[267563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:17 np0005548790.localdomain python3.9[267565]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:17 np0005548790.localdomain sudo[267563]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24706 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18084A410000000001030307) 
Dec 06 09:56:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:56:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:56:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:56:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:56:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:56:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17229 "" "Go-http-client/1.1"
Dec 06 09:56:18 np0005548790.localdomain sudo[267673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkhaxbekkahwvvhfqxwiahgnhhmxxymw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014978.1331599-1001-84614697293790/AnsiballZ_systemd.py
Dec 06 09:56:18 np0005548790.localdomain sudo[267673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:18 np0005548790.localdomain python3.9[267675]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:56:18 np0005548790.localdomain systemd-sysv-generator[267699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:18 np0005548790.localdomain systemd-rc-local-generator[267696]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:18 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:19 np0005548790.localdomain systemd[1]: Starting Create netns directory...
Dec 06 09:56:19 np0005548790.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 06 09:56:19 np0005548790.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 06 09:56:19 np0005548790.localdomain systemd[1]: Finished Create netns directory.
Dec 06 09:56:19 np0005548790.localdomain sudo[267673]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24707 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18084E5F0000000001030307) 
Dec 06 09:56:19 np0005548790.localdomain sudo[267825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbdnqrvlftgfxocxyacdzhikntnvwafg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014979.6977541-1031-177673751175218/AnsiballZ_file.py
Dec 06 09:56:20 np0005548790.localdomain sudo[267825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:56:20 np0005548790.localdomain podman[267828]: 2025-12-06 09:56:20.123548624 +0000 UTC m=+0.103974874 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:56:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65166 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808511F0000000001030307) 
Dec 06 09:56:20 np0005548790.localdomain podman[267828]: 2025-12-06 09:56:20.205457395 +0000 UTC m=+0.185883655 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 06 09:56:20 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:56:20 np0005548790.localdomain python3.9[267827]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:56:20 np0005548790.localdomain sudo[267825]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:20 np0005548790.localdomain sudo[267960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzyhltfbsybfamzyepkxeddjgurwebfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014980.5090609-1054-274015757568241/AnsiballZ_stat.py
Dec 06 09:56:20 np0005548790.localdomain sudo[267960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:20 np0005548790.localdomain python3.9[267962]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:21 np0005548790.localdomain sudo[267960]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:21 np0005548790.localdomain sudo[268017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkgcuwumsudjcaltkhpbrzixqwhsixml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014980.5090609-1054-274015757568241/AnsiballZ_file.py
Dec 06 09:56:21 np0005548790.localdomain sudo[268017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:21 np0005548790.localdomain python3.9[268019]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:56:21 np0005548790.localdomain sudo[268017]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24708 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180856600000000001030307) 
Dec 06 09:56:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1928 DF PROTO=TCP SPT=34670 DPT=9102 SEQ=1350985428 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180859200000000001030307) 
Dec 06 09:56:22 np0005548790.localdomain sudo[268127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgxnpwkhbfdrcwzjflcydvqyssymkfzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014981.9689553-1097-42501735077926/AnsiballZ_file.py
Dec 06 09:56:22 np0005548790.localdomain sudo[268127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:22 np0005548790.localdomain python3.9[268129]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:56:22 np0005548790.localdomain sudo[268127]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:23 np0005548790.localdomain sudo[268237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckrnpaltcdbsaarbtjqrhkjoimwdqkqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014982.8311822-1120-168727939166195/AnsiballZ_stat.py
Dec 06 09:56:23 np0005548790.localdomain sudo[268237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:23 np0005548790.localdomain python3.9[268239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:23 np0005548790.localdomain sudo[268237]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:56:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:56:23 np0005548790.localdomain sudo[268294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vujazjrlgsqzoreuetcgfghjwezmphdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014982.8311822-1120-168727939166195/AnsiballZ_file.py
Dec 06 09:56:23 np0005548790.localdomain sudo[268294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:23 np0005548790.localdomain python3.9[268296]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.oo_blabm recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:23 np0005548790.localdomain sudo[268294]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:24 np0005548790.localdomain sudo[268404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eafdrcckrzowauxzdsedphysrinmojoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014984.180505-1156-187352160880453/AnsiballZ_file.py
Dec 06 09:56:24 np0005548790.localdomain sudo[268404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:24 np0005548790.localdomain python3.9[268406]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:24 np0005548790.localdomain sudo[268404]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:25 np0005548790.localdomain sudo[268514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilurbpcmxiuhenrxzpnnjuhjisxpsmtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014984.9233363-1180-164614915326562/AnsiballZ_stat.py
Dec 06 09:56:25 np0005548790.localdomain sudo[268514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:25 np0005548790.localdomain sudo[268514]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24709 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808661F0000000001030307) 
Dec 06 09:56:25 np0005548790.localdomain sudo[268571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upalzfumhsltmaftaobtdjtpzkkqlown ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014984.9233363-1180-164614915326562/AnsiballZ_file.py
Dec 06 09:56:25 np0005548790.localdomain sudo[268571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:25 np0005548790.localdomain sudo[268571]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:26 np0005548790.localdomain sudo[268681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dufrqtcdsxgcfvfumhgjqgezshespfbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014986.433237-1223-146862805795213/AnsiballZ_container_config_data.py
Dec 06 09:56:26 np0005548790.localdomain sudo[268681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:27 np0005548790.localdomain python3.9[268683]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 06 09:56:27 np0005548790.localdomain sudo[268681]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:27 np0005548790.localdomain sudo[268791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjxvshnwhzkahlffvnvmegzjzrztvqow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014987.512265-1250-259308292949200/AnsiballZ_container_config_hash.py
Dec 06 09:56:27 np0005548790.localdomain sudo[268791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:28 np0005548790.localdomain python3.9[268793]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:56:28 np0005548790.localdomain sudo[268791]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:28 np0005548790.localdomain sudo[268901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ruxzkyfagdtnmrxlcjiqkfsblmnkopyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014988.561229-1277-262234358926540/AnsiballZ_podman_container_info.py
Dec 06 09:56:28 np0005548790.localdomain sudo[268901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:29 np0005548790.localdomain python3.9[268903]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 06 09:56:29 np0005548790.localdomain sudo[268901]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:56:31 np0005548790.localdomain podman[268947]: 2025-12-06 09:56:31.614342276 +0000 UTC m=+0.127270111 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 06 09:56:31 np0005548790.localdomain podman[268947]: 2025-12-06 09:56:31.649362136 +0000 UTC m=+0.162289961 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:56:31 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:56:33 np0005548790.localdomain sudo[269054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqbyjtaqiczcspreqqdcquayczzoloeu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765014993.102026-1315-5205450208629/AnsiballZ_edpm_container_manage.py
Dec 06 09:56:33 np0005548790.localdomain sudo[269054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:33 np0005548790.localdomain python3[269056]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:56:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24710 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808871F0000000001030307) 
Dec 06 09:56:34 np0005548790.localdomain python3[269056]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",
                                                                    "Digest": "sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:11:02.031267563Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 249482216,
                                                                    "VirtualSize": 249482216,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:24.212273596Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:11:01.523582443Z",
                                                                              "created_by": "/bin/sh -c dnf -y install device-mapper-multipath iscsi-initiator-utils && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:11:03.162365736Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 06 09:56:34 np0005548790.localdomain sudo[269054]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:34 np0005548790.localdomain sudo[269227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vmmjlkmlyypkprymjhprraujfimzxpge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014994.4913535-1340-16334622378362/AnsiballZ_stat.py
Dec 06 09:56:34 np0005548790.localdomain sudo[269227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:35 np0005548790.localdomain python3.9[269229]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:56:35 np0005548790.localdomain sudo[269227]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:35 np0005548790.localdomain sudo[269339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uyjrnsthmoxvufersyldqnckhsozrxiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014995.5771601-1366-258402592549855/AnsiballZ_file.py
Dec 06 09:56:35 np0005548790.localdomain sudo[269339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:36 np0005548790.localdomain python3.9[269341]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:36 np0005548790.localdomain sudo[269339]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:56:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:56:36 np0005548790.localdomain systemd[1]: tmp-crun.aAypAF.mount: Deactivated successfully.
Dec 06 09:56:36 np0005548790.localdomain podman[269358]: 2025-12-06 09:56:36.307085632 +0000 UTC m=+0.087258060 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:56:36 np0005548790.localdomain podman[269358]: 2025-12-06 09:56:36.320041598 +0000 UTC m=+0.100214026 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:56:36 np0005548790.localdomain sudo[269425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ouqlifoufqkdvnerbwgxuobjhsevgjdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014995.5771601-1366-258402592549855/AnsiballZ_stat.py
Dec 06 09:56:36 np0005548790.localdomain sudo[269425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:36 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:56:36 np0005548790.localdomain podman[269359]: 2025-12-06 09:56:36.40666591 +0000 UTC m=+0.190596176 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS)
Dec 06 09:56:36 np0005548790.localdomain podman[269359]: 2025-12-06 09:56:36.412455321 +0000 UTC m=+0.196385577 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:56:36 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:56:36 np0005548790.localdomain python3.9[269430]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:56:36 np0005548790.localdomain sudo[269425]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:37 np0005548790.localdomain sudo[269543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmedjmpnkebczqxksnvmcececjcqfaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014996.5950885-1366-256109664202949/AnsiballZ_copy.py
Dec 06 09:56:37 np0005548790.localdomain sudo[269543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:37 np0005548790.localdomain python3.9[269545]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765014996.5950885-1366-256109664202949/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:37 np0005548790.localdomain sudo[269543]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:37 np0005548790.localdomain sudo[269598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pckwnyxenjtrsxddirifdjxcjguikqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765014996.5950885-1366-256109664202949/AnsiballZ_systemd.py
Dec 06 09:56:37 np0005548790.localdomain sudo[269598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:38 np0005548790.localdomain sudo[269601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:56:38 np0005548790.localdomain sudo[269601]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:56:38 np0005548790.localdomain sudo[269601]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:38 np0005548790.localdomain python3.9[269600]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:56:38 np0005548790.localdomain sudo[269598]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:38 np0005548790.localdomain sudo[269621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:56:38 np0005548790.localdomain sudo[269621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:56:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:56:38 np0005548790.localdomain podman[269670]: 2025-12-06 09:56:38.537002842 +0000 UTC m=+0.057117447 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public)
Dec 06 09:56:38 np0005548790.localdomain podman[269670]: 2025-12-06 09:56:38.551448337 +0000 UTC m=+0.071562982 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Dec 06 09:56:38 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:56:38 np0005548790.localdomain sudo[269621]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:39 np0005548790.localdomain sudo[269749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:56:39 np0005548790.localdomain sudo[269749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:56:39 np0005548790.localdomain sudo[269749]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:39 np0005548790.localdomain python3.9[269815]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:56:40 np0005548790.localdomain sudo[269923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhrhoaiferwtljksnonahohjqyimxcfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015000.2805548-1469-81922146630435/AnsiballZ_file.py
Dec 06 09:56:40 np0005548790.localdomain sudo[269923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:40 np0005548790.localdomain python3.9[269925]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:40 np0005548790.localdomain sudo[269923]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:41 np0005548790.localdomain sudo[270033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsgvpesnricodlumseavhbytmzwevnsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015001.5159194-1505-276062272285996/AnsiballZ_file.py
Dec 06 09:56:41 np0005548790.localdomain sudo[270033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:41 np0005548790.localdomain python3.9[270035]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 06 09:56:41 np0005548790.localdomain sudo[270033]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:42 np0005548790.localdomain sudo[270143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xppatrpbqwezyxwsqxatsnwqbbkmrope ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015002.213139-1529-141563398006883/AnsiballZ_modprobe.py
Dec 06 09:56:42 np0005548790.localdomain sudo[270143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:42 np0005548790.localdomain python3.9[270145]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 06 09:56:42 np0005548790.localdomain sudo[270143]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:43 np0005548790.localdomain sudo[270253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uedmgmrjodzpwqhaegkqqerhflqoxpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015003.0705538-1553-75823139560924/AnsiballZ_stat.py
Dec 06 09:56:43 np0005548790.localdomain sudo[270253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:43 np0005548790.localdomain python3.9[270255]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:56:43 np0005548790.localdomain sudo[270253]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:43 np0005548790.localdomain sudo[270310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjgemnaptmrqwatpqjjlsgqqzabqqqla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015003.0705538-1553-75823139560924/AnsiballZ_file.py
Dec 06 09:56:43 np0005548790.localdomain sudo[270310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:56:43 np0005548790.localdomain podman[270313]: 2025-12-06 09:56:43.894995634 +0000 UTC m=+0.081312625 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true)
Dec 06 09:56:43 np0005548790.localdomain podman[270313]: 2025-12-06 09:56:43.907412598 +0000 UTC m=+0.093729569 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:56:43 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:56:44 np0005548790.localdomain python3.9[270312]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:44 np0005548790.localdomain sudo[270310]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:44 np0005548790.localdomain sudo[270438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sulnoodecbpluvxmoeppjjpmxvohzdtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015004.3629885-1592-170127948758583/AnsiballZ_lineinfile.py
Dec 06 09:56:44 np0005548790.localdomain sudo[270438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:44 np0005548790.localdomain python3.9[270440]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:44 np0005548790.localdomain sudo[270438]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:45 np0005548790.localdomain sudo[270548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orepnzmitddgkvlrqrdhetaffqydplum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015005.2420235-1618-245918992519/AnsiballZ_dnf.py
Dec 06 09:56:45 np0005548790.localdomain sudo[270548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:45 np0005548790.localdomain python3.9[270550]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 06 09:56:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:56:46 np0005548790.localdomain podman[270553]: 2025-12-06 09:56:46.57009599 +0000 UTC m=+0.083439730 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:56:46 np0005548790.localdomain podman[270553]: 2025-12-06 09:56:46.578732344 +0000 UTC m=+0.092076104 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:56:46 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:56:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:56:48.365 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:56:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:56:48.365 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:56:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:56:48.365 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:56:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52303 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808BF710000000001030307) 
Dec 06 09:56:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:56:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:56:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:56:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:56:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:56:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17221 "" "Go-http-client/1.1"
Dec 06 09:56:48 np0005548790.localdomain sudo[270548]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52304 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808C35F0000000001030307) 
Dec 06 09:56:50 np0005548790.localdomain python3.9[270683]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 06 09:56:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24711 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808C71F0000000001030307) 
Dec 06 09:56:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:56:50 np0005548790.localdomain podman[270695]: 2025-12-06 09:56:50.56608459 +0000 UTC m=+0.082352563 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 09:56:50 np0005548790.localdomain podman[270695]: 2025-12-06 09:56:50.605160586 +0000 UTC m=+0.121428599 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 09:56:50 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:56:51 np0005548790.localdomain sudo[270820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-envjujqucoziglubzwoaxmhxqtkczmiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015010.7847242-1670-248205009988429/AnsiballZ_file.py
Dec 06 09:56:51 np0005548790.localdomain sudo[270820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:51 np0005548790.localdomain python3.9[270822]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:56:51 np0005548790.localdomain sudo[270820]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52305 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808CB5F0000000001030307) 
Dec 06 09:56:52 np0005548790.localdomain sudo[270930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocpawxfyugnjtrpemudpfmpqynaijrxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015011.850887-1704-81477464820726/AnsiballZ_systemd_service.py
Dec 06 09:56:52 np0005548790.localdomain sudo[270930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65167 DF PROTO=TCP SPT=39894 DPT=9102 SEQ=3222578564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808CF1F0000000001030307) 
Dec 06 09:56:52 np0005548790.localdomain python3.9[270932]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:56:52 np0005548790.localdomain systemd-rc-local-generator[270961]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:56:52 np0005548790.localdomain systemd-sysv-generator[270964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:56:52 np0005548790.localdomain sudo[270930]: pam_unix(sudo:session): session closed for user root
Dec 06 09:56:53 np0005548790.localdomain python3.9[271077]: ansible-ansible.builtin.service_facts Invoked
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:56:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:56:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:56:53 np0005548790.localdomain network[271094]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 06 09:56:53 np0005548790.localdomain network[271095]: 'network-scripts' will be removed from distribution in near future.
Dec 06 09:56:53 np0005548790.localdomain network[271096]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 06 09:56:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52306 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808DB1F0000000001030307) 
Dec 06 09:56:55 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:56:59 np0005548790.localdomain sudo[271328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvjvpajdbgkkuizokyjjnelsumnrbcyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015019.2218199-1761-94155526959903/AnsiballZ_systemd_service.py
Dec 06 09:56:59 np0005548790.localdomain sudo[271328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:56:59 np0005548790.localdomain python3.9[271330]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:00 np0005548790.localdomain sudo[271328]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:01 np0005548790.localdomain sudo[271439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jekuxxkyxmicxgyubfvwaghkegsntpss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015020.9546795-1761-196648257634371/AnsiballZ_systemd_service.py
Dec 06 09:57:01 np0005548790.localdomain sudo[271439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:01 np0005548790.localdomain python3.9[271441]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:01 np0005548790.localdomain sudo[271439]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:02 np0005548790.localdomain sudo[271550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhnxtxyyawkvdfraponuprvptrbnidgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015021.7500758-1761-232327670594706/AnsiballZ_systemd_service.py
Dec 06 09:57:02 np0005548790.localdomain sudo[271550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:57:02 np0005548790.localdomain podman[271552]: 2025-12-06 09:57:02.233141984 +0000 UTC m=+0.068564274 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:57:02 np0005548790.localdomain podman[271552]: 2025-12-06 09:57:02.242096906 +0000 UTC m=+0.077519256 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 09:57:02 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:57:02 np0005548790.localdomain python3.9[271553]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:02 np0005548790.localdomain sudo[271550]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:02 np0005548790.localdomain sudo[271680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kcxsosxedywrzkrzqbgjevxxqxhzsvan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015022.6166713-1761-245949817962425/AnsiballZ_systemd_service.py
Dec 06 09:57:02 np0005548790.localdomain sudo[271680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:03 np0005548790.localdomain python3.9[271682]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:03 np0005548790.localdomain sudo[271680]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52307 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1808FB1F0000000001030307) 
Dec 06 09:57:03 np0005548790.localdomain sudo[271791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cyxelpdjqwuocpzqnyfaemihualgesce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015023.5386627-1761-33616632372679/AnsiballZ_systemd_service.py
Dec 06 09:57:03 np0005548790.localdomain sudo[271791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:04 np0005548790.localdomain python3.9[271793]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:04 np0005548790.localdomain sudo[271791]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:04.417 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:04 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:04.418 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:04 np0005548790.localdomain sudo[271902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brlqywbhtkchhnarwpubdndrmpnofvfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015024.2691398-1761-26843487714680/AnsiballZ_systemd_service.py
Dec 06 09:57:04 np0005548790.localdomain sudo[271902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:04 np0005548790.localdomain python3.9[271904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:04 np0005548790.localdomain sudo[271902]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:05 np0005548790.localdomain sudo[272013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzsqkjfjhkykymlxboqdnrahstnzwwwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015025.0877824-1761-80350167863174/AnsiballZ_systemd_service.py
Dec 06 09:57:05 np0005548790.localdomain sudo[272013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:05 np0005548790.localdomain python3.9[272015]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:05 np0005548790.localdomain sudo[272013]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:06 np0005548790.localdomain sudo[272124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbidbfsufxaukvrqprgfylsyontalpsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015025.8572881-1761-219410090958759/AnsiballZ_systemd_service.py
Dec 06 09:57:06 np0005548790.localdomain sudo[272124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:06 np0005548790.localdomain python3.9[272126]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:57:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:57:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:57:06 np0005548790.localdomain sudo[272124]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:06 np0005548790.localdomain podman[272129]: 2025-12-06 09:57:06.571524655 +0000 UTC m=+0.080399140 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 06 09:57:06 np0005548790.localdomain podman[272129]: 2025-12-06 09:57:06.612288356 +0000 UTC m=+0.121162911 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 09:57:06 np0005548790.localdomain podman[272128]: 2025-12-06 09:57:06.622848311 +0000 UTC m=+0.132665391 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:57:06 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:57:06 np0005548790.localdomain podman[272128]: 2025-12-06 09:57:06.662758038 +0000 UTC m=+0.172575078 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:57:06 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:57:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:06.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:06.886 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:57:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:06.887 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:57:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:07.119 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:57:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:07.120 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:07.120 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:57:07 np0005548790.localdomain sudo[272277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufpztjjwkljnxoyndkfvkmcmdqtctitv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015026.923172-1937-100677275582056/AnsiballZ_file.py
Dec 06 09:57:07 np0005548790.localdomain sudo[272277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:57:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:57:07 np0005548790.localdomain python3.9[272279]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:07 np0005548790.localdomain sudo[272277]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:07 np0005548790.localdomain sudo[272387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mbpbtyyqemrkmbvpzxizgzsqbtdaxmdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015027.5506597-1937-221191242821783/AnsiballZ_file.py
Dec 06 09:57:07 np0005548790.localdomain sudo[272387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:07.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:07.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:08 np0005548790.localdomain python3.9[272389]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:08 np0005548790.localdomain sudo[272387]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:08 np0005548790.localdomain sudo[272498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyfgmeowzwrdqtwhiezrfdgnabznpbea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015028.1487637-1937-277923198524642/AnsiballZ_file.py
Dec 06 09:57:08 np0005548790.localdomain sudo[272498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:08 np0005548790.localdomain python3.9[272500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:08 np0005548790.localdomain sudo[272498]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:08.882 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:09 np0005548790.localdomain sudo[272608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgsmoemvxgfeputlawwqelpwhqoibpmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015028.7386754-1937-192192688594828/AnsiballZ_file.py
Dec 06 09:57:09 np0005548790.localdomain sudo[272608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:57:09 np0005548790.localdomain podman[272611]: 2025-12-06 09:57:09.109512256 +0000 UTC m=+0.076538081 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec 06 09:57:09 np0005548790.localdomain podman[272611]: 2025-12-06 09:57:09.116676722 +0000 UTC m=+0.083702517 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:57:09 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:57:09 np0005548790.localdomain python3.9[272610]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:09 np0005548790.localdomain sudo[272608]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:09 np0005548790.localdomain sudo[272737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpzhwqystbtrtlyehpptplersqurytcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015029.3751547-1937-270558322814784/AnsiballZ_file.py
Dec 06 09:57:09 np0005548790.localdomain sudo[272737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:09.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:09 np0005548790.localdomain python3.9[272739]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:09 np0005548790.localdomain sudo[272737]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:10 np0005548790.localdomain sudo[272847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dywjbrwqhtcimpmqxhfhffxrffbxitbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015030.0258594-1937-39291459789596/AnsiballZ_file.py
Dec 06 09:57:10 np0005548790.localdomain sudo[272847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:10 np0005548790.localdomain python3.9[272849]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:10 np0005548790.localdomain sudo[272847]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:10 np0005548790.localdomain sudo[272957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sjjqcdmgqgtkpphamjzihfahblstnxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015030.5993392-1937-248413517028318/AnsiballZ_file.py
Dec 06 09:57:10 np0005548790.localdomain sudo[272957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:10.885 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:57:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:10.911 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:57:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:10.911 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:57:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:10.913 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:57:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:10.913 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:57:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:10.914 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:57:11 np0005548790.localdomain python3.9[272959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:11 np0005548790.localdomain sudo[272957]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.376 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:57:11 np0005548790.localdomain sudo[273089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efkrdtbuvrcavfihazchxrnjmlsbmhrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015031.2467988-1937-241112154604131/AnsiballZ_file.py
Dec 06 09:57:11 np0005548790.localdomain sudo[273089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.539 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.540 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12538MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.540 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.541 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.590 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.590 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:57:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:11.606 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:57:11 np0005548790.localdomain python3.9[273091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:11 np0005548790.localdomain sudo[273089]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:12.071 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:57:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:12.077 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:57:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:12.092 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:57:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:12.094 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:57:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:57:12.094 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:57:12 np0005548790.localdomain sudo[273221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnptpoyzaqmcvjvozjeglbwmvqegqigk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015032.5017984-2109-18485057374542/AnsiballZ_file.py
Dec 06 09:57:12 np0005548790.localdomain sudo[273221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:13 np0005548790.localdomain python3.9[273223]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:13 np0005548790.localdomain sudo[273221]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:13 np0005548790.localdomain sudo[273331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lunsyzcwvyozzrqlqeucviuvibxgmjav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015033.1230445-2109-55787809615929/AnsiballZ_file.py
Dec 06 09:57:13 np0005548790.localdomain sudo[273331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:13 np0005548790.localdomain python3.9[273333]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:13 np0005548790.localdomain sudo[273331]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:14 np0005548790.localdomain sudo[273441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmdunstdhasxgcroclfuqsugdiotcrtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015033.7961776-2109-24908232035675/AnsiballZ_file.py
Dec 06 09:57:14 np0005548790.localdomain sudo[273441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:57:14 np0005548790.localdomain podman[273444]: 2025-12-06 09:57:14.205500346 +0000 UTC m=+0.078290556 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 09:57:14 np0005548790.localdomain podman[273444]: 2025-12-06 09:57:14.242588201 +0000 UTC m=+0.115378391 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:57:14 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:57:14 np0005548790.localdomain python3.9[273443]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:14 np0005548790.localdomain sudo[273441]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:14 np0005548790.localdomain sudo[273570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwkzxlzonmeyfnjrecexksrpqejjioav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015034.5466046-2109-28212834587840/AnsiballZ_file.py
Dec 06 09:57:14 np0005548790.localdomain sudo[273570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:15 np0005548790.localdomain python3.9[273572]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:15 np0005548790.localdomain sudo[273570]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:15 np0005548790.localdomain sudo[273680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhtjymoztfzxhrcwpmigcuyfmfrjvmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015035.2031627-2109-140493606309972/AnsiballZ_file.py
Dec 06 09:57:15 np0005548790.localdomain sudo[273680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:15 np0005548790.localdomain python3.9[273682]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:15 np0005548790.localdomain sudo[273680]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:16 np0005548790.localdomain sudo[273790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewlxvhsksmzvhafdntpwglvsnrdrinez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015035.8427978-2109-186921711990075/AnsiballZ_file.py
Dec 06 09:57:16 np0005548790.localdomain sudo[273790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:16 np0005548790.localdomain python3.9[273792]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:16 np0005548790.localdomain sudo[273790]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:16 np0005548790.localdomain sudo[273900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbmcnlsfobcbyibqcrxvhqabmctjgctf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015036.4849277-2109-262335385750357/AnsiballZ_file.py
Dec 06 09:57:16 np0005548790.localdomain sudo[273900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:57:16 np0005548790.localdomain podman[273903]: 2025-12-06 09:57:16.914622896 +0000 UTC m=+0.124049276 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:57:16 np0005548790.localdomain podman[273903]: 2025-12-06 09:57:16.921839784 +0000 UTC m=+0.131266164 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:57:16 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:57:16 np0005548790.localdomain python3.9[273902]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:16 np0005548790.localdomain sudo[273900]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:17 np0005548790.localdomain sudo[274031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swhbwrdylaynkmvsivqnfiwkjlqnxvxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015037.0968695-2109-135948881539077/AnsiballZ_file.py
Dec 06 09:57:17 np0005548790.localdomain sudo[274031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:17 np0005548790.localdomain python3.9[274033]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:17 np0005548790.localdomain sudo[274031]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3008 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180934A10000000001030307) 
Dec 06 09:57:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:57:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:57:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:57:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:57:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:57:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17215 "" "Go-http-client/1.1"
Dec 06 09:57:18 np0005548790.localdomain sudo[274141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nujfodtyjwueqfmruzibuqeblyterhrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015038.3018205-2283-116814382974764/AnsiballZ_command.py
Dec 06 09:57:18 np0005548790.localdomain sudo[274141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:18 np0005548790.localdomain python3.9[274143]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:18 np0005548790.localdomain sudo[274141]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3009 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180938A00000000001030307) 
Dec 06 09:57:19 np0005548790.localdomain python3.9[274253]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 06 09:57:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52308 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A18093B1F0000000001030307) 
Dec 06 09:57:20 np0005548790.localdomain sudo[274361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fubjfupnzfbqnxsmvdrjppxoxunapdfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015040.0260563-2337-49243709211584/AnsiballZ_systemd_service.py
Dec 06 09:57:20 np0005548790.localdomain sudo[274361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:20 np0005548790.localdomain python3.9[274363]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 09:57:20 np0005548790.localdomain systemd-rc-local-generator[274396]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 09:57:20 np0005548790.localdomain systemd-sysv-generator[274400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 09:57:20 np0005548790.localdomain podman[274365]: 2025-12-06 09:57:20.755367269 +0000 UTC m=+0.100402431 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 09:57:20 np0005548790.localdomain podman[274365]: 2025-12-06 09:57:20.820468363 +0000 UTC m=+0.165503515 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:57:20 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:57:20 np0005548790.localdomain sudo[274361]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3010 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809409F0000000001030307) 
Dec 06 09:57:21 np0005548790.localdomain sudo[274532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azooavjxmjubzyjwusehlxpkrgpgsmtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015041.2920415-2360-149859997938689/AnsiballZ_command.py
Dec 06 09:57:21 np0005548790.localdomain sudo[274532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:21 np0005548790.localdomain python3.9[274534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:21 np0005548790.localdomain sudo[274532]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:22 np0005548790.localdomain sudo[274643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-levrxhjxywepjxdmywzmbzvvlisukojl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015041.9299517-2360-159690745297582/AnsiballZ_command.py
Dec 06 09:57:22 np0005548790.localdomain sudo[274643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:22 np0005548790.localdomain python3.9[274645]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:22 np0005548790.localdomain sudo[274643]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24712 DF PROTO=TCP SPT=52084 DPT=9102 SEQ=495960246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180945200000000001030307) 
Dec 06 09:57:22 np0005548790.localdomain sudo[274754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjgygpniopbtfxudbqkwseplpdwrimry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015042.5846376-2360-155058035263575/AnsiballZ_command.py
Dec 06 09:57:22 np0005548790.localdomain sudo[274754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:23 np0005548790.localdomain python3.9[274756]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:23 np0005548790.localdomain sudo[274754]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:23 np0005548790.localdomain sudo[274865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-meyyyruzfvrsdzbelqnyctmgszrxxslm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015043.184199-2360-69560221324699/AnsiballZ_command.py
Dec 06 09:57:23 np0005548790.localdomain sudo[274865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:57:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:57:23 np0005548790.localdomain python3.9[274867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:23 np0005548790.localdomain sudo[274865]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:24 np0005548790.localdomain sudo[274976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwdmrfghrgntftgnmmezdxwmxraejexh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015043.8599381-2360-244752892284539/AnsiballZ_command.py
Dec 06 09:57:24 np0005548790.localdomain sudo[274976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:24 np0005548790.localdomain python3.9[274978]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:24 np0005548790.localdomain sudo[274976]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:24 np0005548790.localdomain sudo[275087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acfcxbueeshbndxrouhckwcwoinjriau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015044.3864124-2360-5080385505321/AnsiballZ_command.py
Dec 06 09:57:24 np0005548790.localdomain sudo[275087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:24 np0005548790.localdomain python3.9[275089]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:24 np0005548790.localdomain sudo[275087]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:25 np0005548790.localdomain sudo[275198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azpvjijkgxcjmuhahuutnypkdajbirmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015044.984066-2360-235322163442848/AnsiballZ_command.py
Dec 06 09:57:25 np0005548790.localdomain sudo[275198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:25 np0005548790.localdomain python3.9[275200]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:25 np0005548790.localdomain sudo[275198]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3011 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809505F0000000001030307) 
Dec 06 09:57:25 np0005548790.localdomain sudo[275309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwxhkjyzifzjhysyjawsdmfzeymyvdoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015045.5629897-2360-60978468844173/AnsiballZ_command.py
Dec 06 09:57:25 np0005548790.localdomain sudo[275309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:26 np0005548790.localdomain python3.9[275311]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 09:57:26 np0005548790.localdomain sudo[275309]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:30 np0005548790.localdomain sudo[275420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bertynkzvpdwomtsivapnpdsclqebaax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015049.9390495-2568-17343940417667/AnsiballZ_file.py
Dec 06 09:57:30 np0005548790.localdomain sudo[275420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:30 np0005548790.localdomain python3.9[275422]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:30 np0005548790.localdomain sudo[275420]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:30 np0005548790.localdomain sudo[275530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acapsdwsteixtcwrjebzjynjxylbgokp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015050.5879269-2568-171969984763965/AnsiballZ_file.py
Dec 06 09:57:30 np0005548790.localdomain sudo[275530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:31 np0005548790.localdomain python3.9[275532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:31 np0005548790.localdomain sudo[275530]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:31 np0005548790.localdomain sudo[275640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnfdjpfbtkayflqqduafskogvcfjffdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015051.2255754-2568-196480317541910/AnsiballZ_file.py
Dec 06 09:57:31 np0005548790.localdomain sudo[275640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:31 np0005548790.localdomain python3.9[275642]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:31 np0005548790.localdomain sudo[275640]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:32 np0005548790.localdomain sudo[275750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwmymorvyoksuputwsulakjgemvlulii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015052.0827336-2633-260063579061527/AnsiballZ_file.py
Dec 06 09:57:32 np0005548790.localdomain sudo[275750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:57:32 np0005548790.localdomain systemd[1]: tmp-crun.NxaHNY.mount: Deactivated successfully.
Dec 06 09:57:32 np0005548790.localdomain podman[275753]: 2025-12-06 09:57:32.463615786 +0000 UTC m=+0.094703184 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:57:32 np0005548790.localdomain podman[275753]: 2025-12-06 09:57:32.471200814 +0000 UTC m=+0.102288192 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 09:57:32 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:57:32 np0005548790.localdomain python3.9[275752]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:32 np0005548790.localdomain sudo[275750]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:33 np0005548790.localdomain sudo[275878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjrajwvsoxxbbspgpjxtofrzrjthplsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015052.7364266-2633-181829855217974/AnsiballZ_file.py
Dec 06 09:57:33 np0005548790.localdomain sudo[275878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:33 np0005548790.localdomain python3.9[275880]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:33 np0005548790.localdomain sudo[275878]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:33 np0005548790.localdomain sudo[275988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xezrufygxokstwkiwybfiuvvpmxxaaxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015053.3727777-2633-271695169025592/AnsiballZ_file.py
Dec 06 09:57:33 np0005548790.localdomain sudo[275988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:33 np0005548790.localdomain python3.9[275990]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:33 np0005548790.localdomain sudo[275988]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3012 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809711F0000000001030307) 
Dec 06 09:57:34 np0005548790.localdomain sudo[276098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwddwmxyttcbvkqsasarqbcqvjwmtpvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015053.943177-2633-8934639013228/AnsiballZ_file.py
Dec 06 09:57:34 np0005548790.localdomain sudo[276098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:34 np0005548790.localdomain python3.9[276100]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:34 np0005548790.localdomain sudo[276098]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:34 np0005548790.localdomain sudo[276208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrfrwwpygucyncmqcoyzmjayfrgbpiut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015054.5695174-2633-50621576917255/AnsiballZ_file.py
Dec 06 09:57:34 np0005548790.localdomain sudo[276208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:35 np0005548790.localdomain python3.9[276210]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:35 np0005548790.localdomain sudo[276208]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:35 np0005548790.localdomain sudo[276318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qyxvqrgckdlraalcvomrcgfgorpdgyjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015055.207464-2633-89166364282895/AnsiballZ_file.py
Dec 06 09:57:35 np0005548790.localdomain sudo[276318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:35 np0005548790.localdomain python3.9[276320]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:35 np0005548790.localdomain sudo[276318]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:36 np0005548790.localdomain sudo[276428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-namkbiciwfynhddbbqegmfsgsmdzoyct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015055.7933433-2633-26722798535099/AnsiballZ_file.py
Dec 06 09:57:36 np0005548790.localdomain sudo[276428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:36 np0005548790.localdomain python3.9[276430]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:36 np0005548790.localdomain sudo[276428]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:57:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:57:37 np0005548790.localdomain podman[276448]: 2025-12-06 09:57:37.565647604 +0000 UTC m=+0.083224856 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:57:37 np0005548790.localdomain podman[276448]: 2025-12-06 09:57:37.62318341 +0000 UTC m=+0.140760632 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:57:37 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:57:37 np0005548790.localdomain podman[276449]: 2025-12-06 09:57:37.716681021 +0000 UTC m=+0.231267905 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:57:37 np0005548790.localdomain podman[276449]: 2025-12-06 09:57:37.73010595 +0000 UTC m=+0.244692824 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 09:57:37 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:57:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:57:39 np0005548790.localdomain podman[276491]: 2025-12-06 09:57:39.553831638 +0000 UTC m=+0.071935261 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 09:57:39 np0005548790.localdomain podman[276491]: 2025-12-06 09:57:39.564812954 +0000 UTC m=+0.082916627 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a 
stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec 06 09:57:39 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:57:39 np0005548790.localdomain sudo[276511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:57:39 np0005548790.localdomain sudo[276511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:57:39 np0005548790.localdomain sudo[276511]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:39 np0005548790.localdomain sudo[276529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:57:39 np0005548790.localdomain sudo[276529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:57:40 np0005548790.localdomain sudo[276529]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:41 np0005548790.localdomain sudo[276578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:57:41 np0005548790.localdomain sudo[276578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:57:41 np0005548790.localdomain sudo[276578]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:42 np0005548790.localdomain sudo[276686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfwyvcamckrxuuoiiihxvjrewmdnkdqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015061.9211435-2959-259415348135055/AnsiballZ_getent.py
Dec 06 09:57:42 np0005548790.localdomain sudo[276686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:42 np0005548790.localdomain python3.9[276688]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 06 09:57:42 np0005548790.localdomain sudo[276686]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:43 np0005548790.localdomain sshd[276707]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:57:43 np0005548790.localdomain sshd[276707]: Accepted publickey for zuul from 192.168.122.30 port 57070 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 09:57:43 np0005548790.localdomain systemd-logind[760]: New session 60 of user zuul.
Dec 06 09:57:43 np0005548790.localdomain systemd[1]: Started Session 60 of User zuul.
Dec 06 09:57:43 np0005548790.localdomain sshd[276707]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 09:57:43 np0005548790.localdomain sshd[276710]: Received disconnect from 192.168.122.30 port 57070:11: disconnected by user
Dec 06 09:57:43 np0005548790.localdomain sshd[276710]: Disconnected from user zuul 192.168.122.30 port 57070
Dec 06 09:57:43 np0005548790.localdomain sshd[276707]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:57:43 np0005548790.localdomain systemd[1]: session-60.scope: Deactivated successfully.
Dec 06 09:57:43 np0005548790.localdomain systemd-logind[760]: Session 60 logged out. Waiting for processes to exit.
Dec 06 09:57:44 np0005548790.localdomain systemd-logind[760]: Removed session 60.
Dec 06 09:57:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:57:44 np0005548790.localdomain podman[276796]: 2025-12-06 09:57:44.573867973 +0000 UTC m=+0.082598019 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 09:57:44 np0005548790.localdomain podman[276796]: 2025-12-06 09:57:44.596226805 +0000 UTC m=+0.104956871 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 09:57:44 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:57:44 np0005548790.localdomain python3.9[276830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:45 np0005548790.localdomain python3.9[276922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015064.2935605-3040-241501671334236/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:45 np0005548790.localdomain python3.9[277030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:46 np0005548790.localdomain python3.9[277085]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:57:47 np0005548790.localdomain python3.9[277193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:47 np0005548790.localdomain systemd[1]: tmp-crun.5SeE7X.mount: Deactivated successfully.
Dec 06 09:57:47 np0005548790.localdomain podman[277194]: 2025-12-06 09:57:47.571825613 +0000 UTC m=+0.085263658 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:57:47 np0005548790.localdomain podman[277194]: 2025-12-06 09:57:47.580761615 +0000 UTC m=+0.094199610 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 09:57:47 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:57:48 np0005548790.localdomain python3.9[277302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015067.1060576-3040-36336253716766/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:57:48.365 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:57:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:57:48.366 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:57:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:57:48.366 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:57:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43746 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809A9D10000000001030307) 
Dec 06 09:57:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:57:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:57:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:57:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:57:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:57:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17224 "" "Go-http-client/1.1"
Dec 06 09:57:48 np0005548790.localdomain python3.9[277410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:49 np0005548790.localdomain python3.9[277496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015068.2668293-3040-101637509398384/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=da0d7199af82d0dd331a8c2ddfaef41c68c4707d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43747 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809ADDF0000000001030307) 
Dec 06 09:57:49 np0005548790.localdomain python3.9[277604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3013 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809B1200000000001030307) 
Dec 06 09:57:50 np0005548790.localdomain python3.9[277690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015069.4312136-3040-279680055796649/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:51 np0005548790.localdomain python3.9[277798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43748 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809B5DF0000000001030307) 
Dec 06 09:57:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:57:51 np0005548790.localdomain podman[277885]: 2025-12-06 09:57:51.562833445 +0000 UTC m=+0.073135561 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Dec 06 09:57:51 np0005548790.localdomain podman[277885]: 2025-12-06 09:57:51.599115218 +0000 UTC m=+0.109417314 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 09:57:51 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:57:51 np0005548790.localdomain python3.9[277884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765015070.6224127-3040-83665429186705/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:52 np0005548790.localdomain sudo[278018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zycqxulxvcsbmkuhlosmydjydsjbtfyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015071.9124258-3289-58446975101291/AnsiballZ_file.py
Dec 06 09:57:52 np0005548790.localdomain sudo[278018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52309 DF PROTO=TCP SPT=40950 DPT=9102 SEQ=1784222752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809B9200000000001030307) 
Dec 06 09:57:52 np0005548790.localdomain python3.9[278020]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:52 np0005548790.localdomain sudo[278018]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:52 np0005548790.localdomain sudo[278128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izpoopeunprujkkmthhifvbjigqpkevs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015072.675662-3313-105252343329512/AnsiballZ_copy.py
Dec 06 09:57:52 np0005548790.localdomain sudo[278128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:53 np0005548790.localdomain python3.9[278130]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:53 np0005548790.localdomain sudo[278128]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:57:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:57:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:57:53 np0005548790.localdomain sudo[278238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blydwvlxnlwtxfyahozwhfqvdxhboogn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015073.486676-3338-120292193266789/AnsiballZ_stat.py
Dec 06 09:57:53 np0005548790.localdomain sudo[278238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:53 np0005548790.localdomain python3.9[278240]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:53 np0005548790.localdomain sudo[278238]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:54 np0005548790.localdomain sudo[278350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvwvdtevlgnlkyvxogvjmphaowuhqdgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015074.2851954-3364-117556426574257/AnsiballZ_file.py
Dec 06 09:57:54 np0005548790.localdomain sudo[278350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:54 np0005548790.localdomain python3.9[278352]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:57:54 np0005548790.localdomain sudo[278350]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43749 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809C59F0000000001030307) 
Dec 06 09:57:55 np0005548790.localdomain python3.9[278460]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:57:56 np0005548790.localdomain python3.9[278570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:56 np0005548790.localdomain python3.9[278625]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:57 np0005548790.localdomain python3.9[278733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 06 09:57:58 np0005548790.localdomain python3.9[278788]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 06 09:57:58 np0005548790.localdomain sudo[278896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hntemsfcsjovemioeglndnsxpywlvvai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015078.6890814-3493-28045841703644/AnsiballZ_container_config_data.py
Dec 06 09:57:58 np0005548790.localdomain sudo[278896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:57:59 np0005548790.localdomain python3.9[278898]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 06 09:57:59 np0005548790.localdomain sudo[278896]: pam_unix(sudo:session): session closed for user root
Dec 06 09:57:59 np0005548790.localdomain sudo[279006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgnosbygbcsmmgfpelhapmkqzpunfora ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015079.5534482-3519-221745694305302/AnsiballZ_container_config_hash.py
Dec 06 09:57:59 np0005548790.localdomain sudo[279006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:00 np0005548790.localdomain python3.9[279008]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:58:00 np0005548790.localdomain sudo[279006]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:00 np0005548790.localdomain sudo[279116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcyqwzhmekqyovbyyfbzwvhymudwuurz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765015080.566703-3550-162866904935528/AnsiballZ_edpm_container_manage.py
Dec 06 09:58:00 np0005548790.localdomain sudo[279116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:01 np0005548790.localdomain python3[279118]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:58:01 np0005548790.localdomain python3[279118]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 09:58:01 np0005548790.localdomain sudo[279116]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:02 np0005548790.localdomain sudo[279289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzefubqkmeiihzufrajzycyrdrwlozom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015081.7904632-3573-49737701115142/AnsiballZ_stat.py
Dec 06 09:58:02 np0005548790.localdomain sudo[279289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:02 np0005548790.localdomain python3.9[279291]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:02 np0005548790.localdomain sudo[279289]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:03 np0005548790.localdomain sudo[279401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agvkuyryxnqmrxlaybnvfwbuxnxsptwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015082.9156244-3609-67689142612914/AnsiballZ_container_config_data.py
Dec 06 09:58:03 np0005548790.localdomain sudo[279401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:58:03 np0005548790.localdomain podman[279404]: 2025-12-06 09:58:03.366831258 +0000 UTC m=+0.096016788 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:58:03 np0005548790.localdomain podman[279404]: 2025-12-06 09:58:03.375073017 +0000 UTC m=+0.104258557 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 09:58:03 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:58:03 np0005548790.localdomain python3.9[279403]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 06 09:58:03 np0005548790.localdomain sudo[279401]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43750 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A1809E51F0000000001030307) 
Dec 06 09:58:04 np0005548790.localdomain sudo[279529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjmxpnaajxodlcqhakkrkwylhjtokwpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015083.893702-3636-207113495143974/AnsiballZ_container_config_hash.py
Dec 06 09:58:04 np0005548790.localdomain sudo[279529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:04 np0005548790.localdomain python3.9[279531]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 06 09:58:04 np0005548790.localdomain sudo[279529]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:05.095 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:05 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:05.096 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:05 np0005548790.localdomain sudo[279639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnwvuplyzczztypfgmuldgxxmjorsxwv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765015084.8472688-3666-240889434721541/AnsiballZ_edpm_container_manage.py
Dec 06 09:58:05 np0005548790.localdomain sudo[279639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:05 np0005548790.localdomain python3[279641]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 06 09:58:05 np0005548790.localdomain python3[279641]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 06 09:58:05 np0005548790.localdomain sudo[279639]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:06 np0005548790.localdomain sudo[279812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bzskslfixwjcwkxsyzkaqfwrvffbfygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015086.0634582-3691-28779297418/AnsiballZ_stat.py
Dec 06 09:58:06 np0005548790.localdomain sudo[279812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:06 np0005548790.localdomain python3.9[279814]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:06 np0005548790.localdomain sudo[279812]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:06.882 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:06.904 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:06.904 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:58:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:06.905 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:58:06 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:06.924 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:58:06 np0005548790.localdomain sshd[279834]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:58:07 np0005548790.localdomain sudo[279925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-covseffbyzjtzredbkdebhxdipzvspax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015087.0015523-3717-74813813145259/AnsiballZ_file.py
Dec 06 09:58:07 np0005548790.localdomain sudo[279925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:07 np0005548790.localdomain python3.9[279927]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:07 np0005548790.localdomain sudo[279925]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:07.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:07.887 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:07 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:07.887 229637 DEBUG nova.compute.manager [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:58:07 np0005548790.localdomain sudo[280034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xczgbocykqpkmlcdbfyhektfvgiwmxlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015087.5464215-3717-101390105941576/AnsiballZ_copy.py
Dec 06 09:58:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:58:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:58:07 np0005548790.localdomain sudo[280034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:08 np0005548790.localdomain systemd[1]: tmp-crun.lVb39H.mount: Deactivated successfully.
Dec 06 09:58:08 np0005548790.localdomain podman[280036]: 2025-12-06 09:58:08.042231108 +0000 UTC m=+0.087204164 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 09:58:08 np0005548790.localdomain podman[280036]: 2025-12-06 09:58:08.055148941 +0000 UTC m=+0.100122027 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:58:08 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:58:08 np0005548790.localdomain python3.9[280038]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765015087.5464215-3717-101390105941576/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 09:58:08 np0005548790.localdomain podman[280037]: 2025-12-06 09:58:08.153403488 +0000 UTC m=+0.192364015 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:58:08 np0005548790.localdomain sudo[280034]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:08 np0005548790.localdomain podman[280037]: 2025-12-06 09:58:08.189252439 +0000 UTC m=+0.228212926 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 09:58:08 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:58:08 np0005548790.localdomain sudo[280129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whiqujaqljuwwuuuarpyulduilvjhpnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015087.5464215-3717-101390105941576/AnsiballZ_systemd.py
Dec 06 09:58:08 np0005548790.localdomain sudo[280129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:08 np0005548790.localdomain python3.9[280131]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 09:58:08 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:08.883 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:08 np0005548790.localdomain sudo[280129]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:09 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:09.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:10 np0005548790.localdomain python3.9[280241]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:58:10 np0005548790.localdomain systemd[1]: tmp-crun.tlyALD.mount: Deactivated successfully.
Dec 06 09:58:10 np0005548790.localdomain podman[280259]: 2025-12-06 09:58:10.582586395 +0000 UTC m=+0.093472091 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 09:58:10 np0005548790.localdomain podman[280259]: 2025-12-06 09:58:10.597433048 +0000 UTC m=+0.108318734 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 06 09:58:10 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:58:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:10.886 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:10.911 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:58:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:10.912 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:58:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:10.912 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:58:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:10.912 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:58:10 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:10.913 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:58:11 np0005548790.localdomain python3.9[280391]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.354 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.570 229637 WARNING nova.virt.libvirt.driver [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.572 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12484MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.572 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.573 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.655 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.656 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:58:11 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:11.685 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:58:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:12.180 229637 DEBUG oslo_concurrency.processutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:58:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:12.188 229637 DEBUG nova.compute.provider_tree [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:58:12 np0005548790.localdomain python3.9[280521]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 06 09:58:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:12.211 229637 DEBUG nova.scheduler.client.report [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:58:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:12.215 229637 DEBUG nova.compute.resource_tracker [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:58:12 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:12.216 229637 DEBUG oslo_concurrency.lockutils [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:58:13 np0005548790.localdomain sudo[280631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mijtupxzeneacbzonkpsmlrbcrzdnwiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015092.6879342-3886-162428311875341/AnsiballZ_podman_container.py
Dec 06 09:58:13 np0005548790.localdomain sudo[280631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:13 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:13.217 229637 DEBUG oslo_service.periodic_task [None req-a25193e9-4122-4068-b32b-27ebb86d4cd1 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:13 np0005548790.localdomain python3.9[280633]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 09:58:13 np0005548790.localdomain sudo[280631]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:13 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 102.7 (342 of 333 items), suggesting rotation.
Dec 06 09:58:13 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 09:58:13 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:58:13 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 09:58:14 np0005548790.localdomain sudo[280763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixsrbgyrwhxvinsjogjdeznqrvpjdwoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015093.8816688-3909-225030627738283/AnsiballZ_systemd.py
Dec 06 09:58:14 np0005548790.localdomain sudo[280763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:14 np0005548790.localdomain python3.9[280765]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 06 09:58:14 np0005548790.localdomain systemd[1]: Stopping nova_compute container...
Dec 06 09:58:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:58:14 np0005548790.localdomain systemd[1]: tmp-crun.PQpVRc.mount: Deactivated successfully.
Dec 06 09:58:14 np0005548790.localdomain podman[280781]: 2025-12-06 09:58:14.787538693 +0000 UTC m=+0.084110662 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:58:14 np0005548790.localdomain podman[280781]: 2025-12-06 09:58:14.802537041 +0000 UTC m=+0.099108980 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Dec 06 09:58:14 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:58:17 np0005548790.localdomain sshd[279834]: error: kex_exchange_identification: read: Connection timed out
Dec 06 09:58:17 np0005548790.localdomain sshd[279834]: banner exchange: Connection from 117.81.233.236 port 48100: Connection timed out
Dec 06 09:58:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:58:17 np0005548790.localdomain podman[280800]: 2025-12-06 09:58:17.817229093 +0000 UTC m=+0.077617240 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:58:17 np0005548790.localdomain podman[280800]: 2025-12-06 09:58:17.854670196 +0000 UTC m=+0.115058313 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:58:17 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:58:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39880 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A1F010000000001030307) 
Dec 06 09:58:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:58:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:58:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:58:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148371 "" "Go-http-client/1.1"
Dec 06 09:58:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:58:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17096 "" "Go-http-client/1.1"
Dec 06 09:58:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39881 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A231F0000000001030307) 
Dec 06 09:58:19 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:19.612 229637 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Dec 06 09:58:19 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:19.614 229637 DEBUG oslo_concurrency.lockutils [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:58:19 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:19.614 229637 DEBUG oslo_concurrency.lockutils [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:58:19 np0005548790.localdomain nova_compute[229633]: 2025-12-06 09:58:19.615 229637 DEBUG oslo_concurrency.lockutils [None req-634bf811-d75b-410f-9b71-5c92312b6dc6 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:58:19 np0005548790.localdomain virtqemud[228868]: End of file while reading data: Input/output error
Dec 06 09:58:19 np0005548790.localdomain systemd[1]: libpod-30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26.scope: Deactivated successfully.
Dec 06 09:58:19 np0005548790.localdomain systemd[1]: libpod-30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26.scope: Consumed 17.337s CPU time.
Dec 06 09:58:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43751 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A251F0000000001030307) 
Dec 06 09:58:19 np0005548790.localdomain podman[280769]: 2025-12-06 09:58:19.962693223 +0000 UTC m=+5.356040889 container died 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 09:58:19 np0005548790.localdomain systemd[1]: tmp-crun.9P28l1.mount: Deactivated successfully.
Dec 06 09:58:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26-userdata-shm.mount: Deactivated successfully.
Dec 06 09:58:20 np0005548790.localdomain podman[280769]: 2025-12-06 09:58:20.126685934 +0000 UTC m=+5.520033520 container cleanup 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:58:20 np0005548790.localdomain podman[280769]: nova_compute
Dec 06 09:58:20 np0005548790.localdomain podman[280849]: error opening file `/run/crun/30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26/status`: No such file or directory
Dec 06 09:58:20 np0005548790.localdomain podman[280838]: 2025-12-06 09:58:20.233386885 +0000 UTC m=+0.067713377 container cleanup 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 06 09:58:20 np0005548790.localdomain podman[280838]: nova_compute
Dec 06 09:58:20 np0005548790.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 06 09:58:20 np0005548790.localdomain systemd[1]: Stopped nova_compute container.
Dec 06 09:58:20 np0005548790.localdomain systemd[1]: Starting nova_compute container...
Dec 06 09:58:20 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:58:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a14c055f0120ab812f6d286999a71cde76653890989ed47d549081354032c382/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:20 np0005548790.localdomain podman[280851]: 2025-12-06 09:58:20.372571618 +0000 UTC m=+0.105381217 container init 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 06 09:58:20 np0005548790.localdomain podman[280851]: 2025-12-06 09:58:20.378809433 +0000 UTC m=+0.111619032 container start 30d4ec6a5d3288f1c354684ec26537de7e0edd7aa78b7c3d46996c8317c45d26 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 09:58:20 np0005548790.localdomain podman[280851]: nova_compute
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + sudo -E kolla_set_configs
Dec 06 09:58:20 np0005548790.localdomain systemd[1]: Started nova_compute container.
Dec 06 09:58:20 np0005548790.localdomain sudo[280763]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Validating config file
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying service configuration files
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /etc/ceph
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Creating directory /etc/ceph
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/ceph
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Writing out command to execute
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: ++ cat /run_command
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + CMD=nova-compute
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + ARGS=
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + sudo kolla_copy_cacerts
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + [[ ! -n '' ]]
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + . kolla_extend_start
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: Running command: 'nova-compute'
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + echo 'Running command: '\''nova-compute'\'''
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + umask 0022
Dec 06 09:58:20 np0005548790.localdomain nova_compute[280865]: + exec nova-compute
Dec 06 09:58:20 np0005548790.localdomain sudo[280984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztlkapdqxztylpqtsrnkmevkgqddkcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765015100.6787317-3938-167498510790328/AnsiballZ_podman_container.py
Dec 06 09:58:20 np0005548790.localdomain sudo[280984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 06 09:58:21 np0005548790.localdomain python3.9[280986]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None 
preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: Started libpod-conmon-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7.scope.
Dec 06 09:58:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39882 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A2B1F0000000001030307) 
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 09:58:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:21 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 06 09:58:21 np0005548790.localdomain podman[281012]: 2025-12-06 09:58:21.504649132 +0000 UTC m=+0.113952194 container init a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init)
Dec 06 09:58:21 np0005548790.localdomain podman[281012]: 2025-12-06 09:58:21.514631907 +0000 UTC m=+0.123934949 container start a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 06 09:58:21 np0005548790.localdomain python3.9[280986]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Applying nova statedir ownership
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 06 09:58:21 np0005548790.localdomain nova_compute_init[281033]: INFO:nova_statedir:Nova statedir ownership complete
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: libpod-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7.scope: Deactivated successfully.
Dec 06 09:58:21 np0005548790.localdomain podman[281034]: 2025-12-06 09:58:21.575646675 +0000 UTC m=+0.038926083 container died a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, managed_by=edpm_ansible)
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:58:21 np0005548790.localdomain podman[281047]: 2025-12-06 09:58:21.669202158 +0000 UTC m=+0.088743026 container cleanup a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, container_name=nova_compute_init, org.label-schema.schema-version=1.0)
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: libpod-conmon-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7.scope: Deactivated successfully.
Dec 06 09:58:21 np0005548790.localdomain podman[281070]: 2025-12-06 09:58:21.743682124 +0000 UTC m=+0.082436108 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 09:58:21 np0005548790.localdomain sudo[280984]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:21 np0005548790.localdomain podman[281070]: 2025-12-06 09:58:21.845198137 +0000 UTC m=+0.183952131 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9f34bb3b53a6136c2bbdfae7ed4dc44ec54288f4d2e146a5e9ea956f190ac6df-merged.mount: Deactivated successfully.
Dec 06 09:58:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a395dbd0ab59430267ec664b211f8908514541a37ff8aaee17c95683340e21f7-userdata-shm.mount: Deactivated successfully.
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.159 280869 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.159 280869 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.159 280869 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.159 280869 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.302 280869 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.323 280869 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.324 280869 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 06 09:58:22 np0005548790.localdomain sshd[262564]: pam_unix(sshd:session): session closed for user zuul
Dec 06 09:58:22 np0005548790.localdomain systemd-logind[760]: Session 59 logged out. Waiting for processes to exit.
Dec 06 09:58:22 np0005548790.localdomain systemd[1]: session-59.scope: Deactivated successfully.
Dec 06 09:58:22 np0005548790.localdomain systemd[1]: session-59.scope: Consumed 1min 30.296s CPU time.
Dec 06 09:58:22 np0005548790.localdomain systemd-logind[760]: Removed session 59.
Dec 06 09:58:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3014 DF PROTO=TCP SPT=57696 DPT=9102 SEQ=3137403385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A2F1F0000000001030307) 
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.707 280869 INFO nova.virt.driver [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.823 280869 INFO nova.compute.provider_config [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.833 280869 DEBUG oslo_concurrency.lockutils [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.833 280869 DEBUG oslo_concurrency.lockutils [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_concurrency.lockutils [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.834 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.835 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] console_host                   = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.836 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.837 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.838 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] host                           = np0005548790.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.839 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.840 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.841 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.842 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.843 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.844 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.845 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.846 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.847 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.848 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.849 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.849 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.849 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.849 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.849 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.849 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.850 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.851 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.852 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.853 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.854 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.855 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.856 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.857 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.858 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.859 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.860 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.861 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.862 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.863 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.864 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.865 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.865 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.865 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.865 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.865 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.865 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.866 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.867 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.868 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.869 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.870 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.871 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.872 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.873 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.874 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.875 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.876 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.877 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.878 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.879 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.880 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.881 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.881 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.881 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.881 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.881 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.881 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.882 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.883 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.884 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.885 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.886 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.887 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.888 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.889 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.890 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.891 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.891 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.891 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.891 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.891 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.891 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.892 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.892 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.892 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.892 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.892 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.892 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.893 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.893 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.893 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.893 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.893 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.894 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.894 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.894 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.894 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.894 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.895 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.895 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.895 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.895 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.895 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.896 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.896 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.896 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.896 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.896 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.897 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.897 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.897 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.897 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.898 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.898 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.898 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.898 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.898 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.899 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.899 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.899 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.899 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.899 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.900 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.900 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.900 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.900 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.900 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.901 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.901 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.901 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.901 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.901 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.902 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.902 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.902 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.902 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.902 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.903 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.903 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.903 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.903 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.903 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.904 280869 WARNING oslo_config.cfg [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: and ``live_migration_inbound_addr`` respectively.
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: ).  Its value may be silently ignored in the future.
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.904 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.904 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.904 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.905 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.905 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.905 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.905 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.905 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.906 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.906 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.906 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.906 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.906 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.907 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.907 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.907 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.907 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.907 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.908 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rbd_secret_uuid        = 1939e851-b10c-5c3b-9bb7-8e7f380233e8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.908 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.908 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.908 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.908 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.909 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.909 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.909 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.909 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.909 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.909 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.910 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.910 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.910 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.910 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.910 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.911 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.911 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.911 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.911 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.911 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.912 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.912 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.912 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.912 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.912 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.913 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.913 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.913 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.913 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.913 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.914 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.914 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.914 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.914 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.914 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.915 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.915 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.915 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.915 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.915 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.916 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.916 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.916 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.916 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.916 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.917 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.917 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.917 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.917 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.917 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.918 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.918 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.918 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.918 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.918 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.918 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.919 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.919 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.919 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.919 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.919 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.919 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.920 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.920 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.920 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.920 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.921 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.921 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.921 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.921 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.921 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.921 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.922 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.922 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.922 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.922 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.922 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.922 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.923 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.923 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.923 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.923 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.923 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.923 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.924 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.924 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.924 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.924 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.924 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.924 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.925 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.925 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.925 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.925 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.925 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.925 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.926 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.926 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.926 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.926 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.926 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.926 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.927 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.927 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.927 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.927 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.927 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.928 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.928 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.928 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.928 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.928 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.929 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.930 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.930 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.930 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.930 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.930 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.930 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.931 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.932 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.933 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.933 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.933 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.933 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.933 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.933 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.934 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.934 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.934 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.934 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.934 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.934 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.935 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.936 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.936 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.936 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.936 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.936 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.936 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.937 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.937 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.937 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.937 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.937 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.937 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.938 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.938 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.938 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.938 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.938 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.938 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.939 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.940 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.941 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.942 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.943 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.944 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.945 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.946 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.946 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.946 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.946 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.946 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.946 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.947 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.948 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.949 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.950 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.951 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.952 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.953 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.954 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.955 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.956 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.957 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.958 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.959 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.960 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.961 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.962 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.963 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.964 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.965 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.966 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.967 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.967 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.967 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.967 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.967 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.967 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.968 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.969 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.970 280869 DEBUG oslo_service.service [None req-5d78afeb-1b93-4679-aa25-57daa10ee891 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 06 09:58:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:22.971 280869 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.017 280869 INFO nova.virt.node [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Determined node identity 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from /var/lib/nova/compute_id
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.017 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.018 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.018 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.019 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.028 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f135f74c640> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.030 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f135f74c640> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.032 280869 INFO nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Connection event '1' reason 'None'
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.035 280869 INFO nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Libvirt host capabilities <capabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <host>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <uuid>f03c6239-85fa-4e2b-b1f7-56cf939bb96f</uuid>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <arch>x86_64</arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model>EPYC-Rome-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <vendor>AMD</vendor>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <microcode version='16777317'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <signature family='23' model='49' stepping='0'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='x2apic'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='tsc-deadline'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='osxsave'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='hypervisor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='tsc_adjust'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='spec-ctrl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='stibp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='arch-capabilities'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='cmp_legacy'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='topoext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='virt-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='lbrv'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='tsc-scale'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='vmcb-clean'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='pause-filter'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='pfthreshold'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='svme-addr-chk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='rdctl-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='skip-l1dfl-vmentry'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='mds-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature name='pschange-mc-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <pages unit='KiB' size='4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <pages unit='KiB' size='2048'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <pages unit='KiB' size='1048576'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <power_management>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <suspend_mem/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <suspend_disk/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <suspend_hybrid/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </power_management>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <iommu support='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <migration_features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <live/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <uri_transports>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <uri_transport>tcp</uri_transport>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <uri_transport>rdma</uri_transport>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </uri_transports>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </migration_features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <topology>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <cells num='1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <cell id='0'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           <memory unit='KiB'>16116612</memory>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           <pages unit='KiB' size='2048'>0</pages>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           <distances>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <sibling id='0' value='10'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           </distances>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           <cpus num='8'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:           </cpus>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         </cell>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </cells>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </topology>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <cache>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </cache>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <secmodel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model>selinux</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <doi>0</doi>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </secmodel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <secmodel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model>dac</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <doi>0</doi>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </secmodel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </host>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <guest>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <os_type>hvm</os_type>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <arch name='i686'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <wordsize>32</wordsize>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <domain type='qemu'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <domain type='kvm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <pae/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <nonpae/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <acpi default='on' toggle='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <apic default='on' toggle='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <cpuselection/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <deviceboot/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <disksnapshot default='on' toggle='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <externalSnapshot/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </guest>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <guest>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <os_type>hvm</os_type>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <arch name='x86_64'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <wordsize>64</wordsize>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <domain type='qemu'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <domain type='kvm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <acpi default='on' toggle='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <apic default='on' toggle='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <cpuselection/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <deviceboot/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <disksnapshot default='on' toggle='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <externalSnapshot/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </guest>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: </capabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.043 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.045 280869 DEBUG nova.virt.libvirt.volume.mount [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.046 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: <domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <domain>kvm</domain>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <arch>i686</arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <vcpu max='240'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <iothreads supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <os supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='firmware'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <loader supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>rom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pflash</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='readonly'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>yes</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='secure'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </loader>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='maximum' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='maximumMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-model' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <vendor>AMD</vendor>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='x2apic'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='stibp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='succor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lbrv'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='custom' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Dhyana-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-128'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-256'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-512'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <memoryBacking supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='sourceType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>anonymous</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>memfd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </memoryBacking>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <disk supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='diskDevice'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>disk</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cdrom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>floppy</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>lun</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ide</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>fdc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>sata</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <graphics supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vnc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egl-headless</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </graphics>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <video supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='modelType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vga</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cirrus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>none</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>bochs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ramfb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hostdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='mode'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>subsystem</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='startupPolicy'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>mandatory</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>requisite</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>optional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='subsysType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pci</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='capsType'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='pciBackend'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hostdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <rng supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>random</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <filesystem supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='driverType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>path</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>handle</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtiofs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </filesystem>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <tpm supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-tis</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-crb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emulator</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>external</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendVersion'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>2.0</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </tpm>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <redirdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </redirdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <channel supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </channel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <crypto supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </crypto>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <interface supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>passt</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <panic supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>isa</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>hyperv</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </panic>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <console supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>null</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dev</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pipe</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stdio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>udp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tcp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu-vdagent</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </console>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <gic supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <genid supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backingStoreInput supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backup supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <async-teardown supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <ps2 supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sev supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sgx supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hyperv supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='features'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>relaxed</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vapic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>spinlocks</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vpindex</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>runtime</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>synic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stimer</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reset</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vendor_id</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>frequencies</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reenlightenment</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tlbflush</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ipi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>avic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emsr_bitmap</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>xmm_input</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <spinlocks>4095</spinlocks>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <stimer_direct>on</stimer_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hyperv>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <launchSecurity supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='sectype'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tdx</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </launchSecurity>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: </domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.056 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: <domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <domain>kvm</domain>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <arch>i686</arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <vcpu max='1024'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <iothreads supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <os supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='firmware'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <loader supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>rom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pflash</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='readonly'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>yes</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='secure'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </loader>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='maximum' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='maximumMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-model' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <vendor>AMD</vendor>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='x2apic'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='stibp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='succor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lbrv'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='custom' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Dhyana-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-128'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-256'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-512'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <memoryBacking supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='sourceType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>anonymous</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>memfd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </memoryBacking>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <disk supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='diskDevice'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>disk</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cdrom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>floppy</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>lun</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>fdc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>sata</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <graphics supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vnc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egl-headless</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </graphics>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <video supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='modelType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vga</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cirrus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>none</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>bochs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ramfb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hostdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='mode'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>subsystem</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='startupPolicy'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>mandatory</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>requisite</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>optional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='subsysType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pci</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='capsType'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='pciBackend'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hostdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <rng supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>random</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <filesystem supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='driverType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>path</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>handle</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtiofs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </filesystem>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <tpm supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-tis</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-crb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emulator</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>external</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendVersion'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>2.0</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </tpm>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <redirdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </redirdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <channel supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </channel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <crypto supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </crypto>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <interface supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>passt</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <panic supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>isa</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>hyperv</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </panic>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <console supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>null</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dev</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pipe</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stdio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>udp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tcp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu-vdagent</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </console>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <gic supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <genid supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backingStoreInput supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backup supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <async-teardown supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <ps2 supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sev supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sgx supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hyperv supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='features'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>relaxed</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vapic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>spinlocks</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vpindex</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>runtime</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>synic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stimer</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reset</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vendor_id</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>frequencies</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reenlightenment</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tlbflush</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ipi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>avic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emsr_bitmap</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>xmm_input</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <spinlocks>4095</spinlocks>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <stimer_direct>on</stimer_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hyperv>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <launchSecurity supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='sectype'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tdx</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </launchSecurity>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: </domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.093 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.097 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: <domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <domain>kvm</domain>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <arch>x86_64</arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <vcpu max='1024'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <iothreads supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <os supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='firmware'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>efi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <loader supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>rom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pflash</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='readonly'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>yes</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='secure'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>yes</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </loader>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='maximum' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='maximumMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-model' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <vendor>AMD</vendor>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='x2apic'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='stibp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='succor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lbrv'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='custom' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Dhyana-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-128'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-256'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-512'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <memoryBacking supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='sourceType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>anonymous</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>memfd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </memoryBacking>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <disk supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='diskDevice'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>disk</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cdrom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>floppy</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>lun</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>fdc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>sata</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <graphics supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vnc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egl-headless</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </graphics>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <video supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='modelType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vga</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cirrus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>none</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>bochs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ramfb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hostdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='mode'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>subsystem</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='startupPolicy'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>mandatory</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>requisite</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>optional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='subsysType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pci</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='capsType'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='pciBackend'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hostdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <rng supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>random</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <filesystem supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='driverType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>path</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>handle</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtiofs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </filesystem>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <tpm supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-tis</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-crb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emulator</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>external</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendVersion'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>2.0</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </tpm>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <redirdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </redirdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <channel supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </channel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <crypto supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </crypto>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <interface supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>passt</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <panic supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>isa</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>hyperv</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </panic>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <console supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>null</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dev</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pipe</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stdio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>udp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tcp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu-vdagent</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </console>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <gic supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <genid supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backingStoreInput supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backup supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <async-teardown supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <ps2 supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sev supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sgx supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hyperv supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='features'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>relaxed</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vapic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>spinlocks</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vpindex</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>runtime</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>synic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stimer</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reset</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vendor_id</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>frequencies</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reenlightenment</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tlbflush</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ipi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>avic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emsr_bitmap</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>xmm_input</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <spinlocks>4095</spinlocks>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <stimer_direct>on</stimer_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hyperv>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <launchSecurity supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='sectype'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tdx</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </launchSecurity>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: </domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.153 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: <domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <path>/usr/libexec/qemu-kvm</path>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <domain>kvm</domain>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <arch>x86_64</arch>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <vcpu max='240'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <iothreads supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <os supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='firmware'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <loader supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>rom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pflash</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='readonly'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>yes</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='secure'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>no</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </loader>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-passthrough' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='hostPassthroughMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='maximum' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='maximumMigratable'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>on</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>off</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='host-model' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <vendor>AMD</vendor>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='x2apic'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-deadline'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='hypervisor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc_adjust'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='spec-ctrl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='stibp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='cmp_legacy'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='overflow-recov'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='succor'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='amd-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='virt-ssbd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lbrv'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='tsc-scale'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='vmcb-clean'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pause-filter'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='pfthreshold'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='svme-addr-chk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <feature policy='disable' name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <mode name='custom' supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Broadwell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cascadelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Cooperlake-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Denverton-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Dhyana-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Genoa-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='auto-ibrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Milan-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amd-psfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='no-nested-data-bp'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='null-sel-clr-base'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='stibp-always-on'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-Rome-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='EPYC-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='GraniteRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-128'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-256'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx10-512'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='prefetchiti'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Haswell-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-noTSX'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v6'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Icelake-Server-v7'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='IvyBridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='KnightsMill-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4fmaps'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-4vnniw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512er'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512pf'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G4-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Opteron_G5-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fma4'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tbm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xop'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SapphireRapids-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='amx-tile'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-bf16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-fp16'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512-vpopcntdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bitalg'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vbmi2'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrc'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fzrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='la57'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='taa-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='tsx-ldtrk'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xfd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='SierraForest-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ifma'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-ne-convert'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx-vnni-int8'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='bus-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cmpccxadd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fbsdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='fsrs'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ibrs-all'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mcdt-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pbrsb-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='psdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='sbdr-ssdp-no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='serialize'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vaes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='vpclmulqdq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Client-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='hle'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='rtm'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Skylake-Server-v5'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512bw'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512cd'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512dq'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512f'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='avx512vl'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='invpcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pcid'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='pku'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='mpx'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v2'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v3'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='core-capability'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='split-lock-detect'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='Snowridge-v4'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='cldemote'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='erms'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='gfni'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdir64b'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='movdiri'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='xsaves'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='athlon-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='core2duo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='coreduo-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='n270-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='ss'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <blockers model='phenom-v1'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnow'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <feature name='3dnowext'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </blockers>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </mode>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <memoryBacking supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <enum name='sourceType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>anonymous</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <value>memfd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </memoryBacking>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <disk supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='diskDevice'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>disk</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cdrom</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>floppy</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>lun</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ide</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>fdc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>sata</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <graphics supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vnc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egl-headless</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </graphics>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <video supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='modelType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vga</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>cirrus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>none</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>bochs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ramfb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hostdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='mode'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>subsystem</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='startupPolicy'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>mandatory</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>requisite</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>optional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='subsysType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pci</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>scsi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='capsType'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='pciBackend'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hostdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <rng supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtio-non-transitional</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>random</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>egd</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <filesystem supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='driverType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>path</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>handle</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>virtiofs</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </filesystem>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <tpm supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-tis</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tpm-crb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emulator</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>external</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendVersion'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>2.0</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </tpm>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <redirdev supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='bus'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>usb</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </redirdev>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <channel supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </channel>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <crypto supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendModel'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>builtin</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </crypto>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <interface supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='backendType'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>default</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>passt</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <panic supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='model'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>isa</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>hyperv</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </panic>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <console supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='type'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>null</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vc</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pty</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dev</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>file</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>pipe</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stdio</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>udp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tcp</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>unix</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>qemu-vdagent</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>dbus</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </console>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <gic supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <genid supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backingStoreInput supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <backup supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <async-teardown supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <ps2 supported='yes'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sev supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <sgx supported='no'/>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <hyperv supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='features'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>relaxed</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vapic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>spinlocks</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vpindex</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>runtime</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>synic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>stimer</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reset</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>vendor_id</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>frequencies</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>reenlightenment</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tlbflush</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>ipi</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>avic</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>emsr_bitmap</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>xmm_input</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <spinlocks>4095</spinlocks>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <stimer_direct>on</stimer_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_direct>off</tlbflush_direct>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <tlbflush_extended>off</tlbflush_extended>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </defaults>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </hyperv>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     <launchSecurity supported='yes'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       <enum name='sectype'>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:         <value>tdx</value>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:       </enum>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:     </launchSecurity>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: </domainCapabilities>
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.206 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.206 280869 INFO nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Secure Boot support detected
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.208 280869 INFO nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.208 280869 INFO nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.216 280869 DEBUG nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.251 280869 INFO nova.virt.node [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Determined node identity 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from /var/lib/nova/compute_id
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.270 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Verified node 9d142787-bd19-4b53-bf45-24c0e0c1cff0 matches my host np0005548790.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.295 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.378 280869 DEBUG oslo_concurrency.lockutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.378 280869 DEBUG oslo_concurrency.lockutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.378 280869 DEBUG oslo_concurrency.lockutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.379 280869 DEBUG nova.compute.resource_tracker [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.379 280869 DEBUG oslo_concurrency.processutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:58:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:58:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:23.819 280869 DEBUG oslo_concurrency.processutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.031 280869 WARNING nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.033 280869 DEBUG nova.compute.resource_tracker [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12537MB free_disk=41.83721923828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.033 280869 DEBUG oslo_concurrency.lockutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.034 280869 DEBUG oslo_concurrency.lockutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.209 280869 DEBUG nova.compute.resource_tracker [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.209 280869 DEBUG nova.compute.resource_tracker [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.278 280869 DEBUG nova.scheduler.client.report [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.301 280869 DEBUG nova.scheduler.client.report [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.302 280869 DEBUG nova.compute.provider_tree [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.322 280869 DEBUG nova.scheduler.client.report [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.362 280869 DEBUG nova.scheduler.client.report [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.386 280869 DEBUG oslo_concurrency.processutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.806 280869 DEBUG oslo_concurrency.processutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.813 280869 DEBUG nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.814 280869 INFO nova.virt.libvirt.host [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] kernel doesn't support AMD SEV
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.815 280869 DEBUG nova.compute.provider_tree [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.816 280869 DEBUG nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.846 280869 DEBUG nova.scheduler.client.report [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.876 280869 DEBUG nova.compute.resource_tracker [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.877 280869 DEBUG oslo_concurrency.lockutils [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.877 280869 DEBUG nova.service [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.904 280869 DEBUG nova.service [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 06 09:58:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:24.905 280869 DEBUG nova.servicegroup.drivers.db [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] DB_Driver: join new ServiceGroup member np0005548790.localdomain to the compute group, service = <Service: host=np0005548790.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 06 09:58:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39883 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A3ADF0000000001030307) 
Dec 06 09:58:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:29.909 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:58:29.944 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:58:31 np0005548790.localdomain sshd[281184]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 09:58:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:58:33 np0005548790.localdomain podman[281186]: 2025-12-06 09:58:33.563951259 +0000 UTC m=+0.080894087 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 06 09:58:33 np0005548790.localdomain podman[281186]: 2025-12-06 09:58:33.596278956 +0000 UTC m=+0.113221804 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 09:58:33 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:58:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39884 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A5B1F0000000001030307) 
Dec 06 09:58:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:58:37.915 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 09:58:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:58:37.916 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 09:58:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:58:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:58:38 np0005548790.localdomain systemd[1]: tmp-crun.AlaQTZ.mount: Deactivated successfully.
Dec 06 09:58:38 np0005548790.localdomain podman[281204]: 2025-12-06 09:58:38.592047797 +0000 UTC m=+0.105941052 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:58:38 np0005548790.localdomain podman[281205]: 2025-12-06 09:58:38.655634353 +0000 UTC m=+0.165362748 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 09:58:38 np0005548790.localdomain podman[281205]: 2025-12-06 09:58:38.665983838 +0000 UTC m=+0.175712213 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 09:58:38 np0005548790.localdomain podman[281204]: 2025-12-06 09:58:38.677023711 +0000 UTC m=+0.190917006 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:58:38 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:58:38 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:58:40 np0005548790.localdomain sshd[281184]: Received disconnect from 101.47.160.186 port 46256:11: Bye Bye [preauth]
Dec 06 09:58:40 np0005548790.localdomain sshd[281184]: Disconnected from authenticating user root 101.47.160.186 port 46256 [preauth]
Dec 06 09:58:41 np0005548790.localdomain sudo[281244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:58:41 np0005548790.localdomain sudo[281244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:58:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:58:41 np0005548790.localdomain sudo[281244]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:41 np0005548790.localdomain sudo[281268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:58:41 np0005548790.localdomain sudo[281268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:58:41 np0005548790.localdomain podman[281262]: 2025-12-06 09:58:41.47984602 +0000 UTC m=+0.068385195 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 09:58:41 np0005548790.localdomain podman[281262]: 2025-12-06 09:58:41.515574898 +0000 UTC m=+0.104114073 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 06 09:58:41 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:58:42 np0005548790.localdomain sudo[281268]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:43 np0005548790.localdomain sudo[281332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:58:43 np0005548790.localdomain sudo[281332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:58:43 np0005548790.localdomain sudo[281332]: pam_unix(sudo:session): session closed for user root
Dec 06 09:58:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:58:45 np0005548790.localdomain podman[281350]: 2025-12-06 09:58:45.575218763 +0000 UTC m=+0.086652590 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 09:58:45 np0005548790.localdomain podman[281350]: 2025-12-06 09:58:45.591231508 +0000 UTC m=+0.102665275 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 09:58:45 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:58:46 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:58:46.919 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 09:58:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:58:48.366 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:58:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:58:48.366 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:58:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:58:48.367 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:58:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63677 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A94300000000001030307) 
Dec 06 09:58:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:58:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:58:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:58:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:58:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:58:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17222 "" "Go-http-client/1.1"
Dec 06 09:58:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:58:48 np0005548790.localdomain podman[281369]: 2025-12-06 09:58:48.567476047 +0000 UTC m=+0.082823728 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 09:58:48 np0005548790.localdomain podman[281369]: 2025-12-06 09:58:48.60412967 +0000 UTC m=+0.119477291 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:58:48 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:58:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63678 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A981F0000000001030307) 
Dec 06 09:58:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39885 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180A9B1F0000000001030307) 
Dec 06 09:58:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63679 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180AA01F0000000001030307) 
Dec 06 09:58:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43752 DF PROTO=TCP SPT=49164 DPT=9102 SEQ=4055321179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180AA31F0000000001030307) 
Dec 06 09:58:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:58:52 np0005548790.localdomain podman[281392]: 2025-12-06 09:58:52.572656556 +0000 UTC m=+0.089018403 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:58:52 np0005548790.localdomain podman[281392]: 2025-12-06 09:58:52.613511131 +0000 UTC m=+0.129872978 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 09:58:52 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:58:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:58:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:58:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63680 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180AAFDF0000000001030307) 
Dec 06 09:59:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63681 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180AD11F0000000001030307) 
Dec 06 09:59:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:59:04 np0005548790.localdomain podman[281418]: 2025-12-06 09:59:04.567503044 +0000 UTC m=+0.083606500 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 09:59:04 np0005548790.localdomain podman[281418]: 2025-12-06 09:59:04.603278133 +0000 UTC m=+0.119381599 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 09:59:04 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.321 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 09:59:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 09:59:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:59:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:59:09 np0005548790.localdomain podman[281434]: 2025-12-06 09:59:09.576107125 +0000 UTC m=+0.080924008 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:59:09 np0005548790.localdomain podman[281434]: 2025-12-06 09:59:09.58759092 +0000 UTC m=+0.092407803 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 09:59:09 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:59:09 np0005548790.localdomain podman[281435]: 2025-12-06 09:59:09.670101128 +0000 UTC m=+0.171520611 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 09:59:09 np0005548790.localdomain podman[281435]: 2025-12-06 09:59:09.680285269 +0000 UTC m=+0.181704772 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 09:59:09 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:59:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:59:12 np0005548790.localdomain podman[281475]: 2025-12-06 09:59:12.564272742 +0000 UTC m=+0.066468984 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 09:59:12 np0005548790.localdomain podman[281475]: 2025-12-06 09:59:12.603310728 +0000 UTC m=+0.105506960 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., distribution-scope=public)
Dec 06 09:59:12 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:59:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:59:16 np0005548790.localdomain podman[281497]: 2025-12-06 09:59:16.558948592 +0000 UTC m=+0.077071405 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 09:59:16 np0005548790.localdomain podman[281497]: 2025-12-06 09:59:16.568684191 +0000 UTC m=+0.086806974 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:59:16 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:59:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30384 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B09610000000001030307) 
Dec 06 09:59:18 np0005548790.localdomain podman[239825]: time="2025-12-06T09:59:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:59:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:59:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:59:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:59:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17231 "" "Go-http-client/1.1"
Dec 06 09:59:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30385 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B0D5F0000000001030307) 
Dec 06 09:59:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:59:19 np0005548790.localdomain podman[281516]: 2025-12-06 09:59:19.568114877 +0000 UTC m=+0.085900931 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:59:19 np0005548790.localdomain podman[281516]: 2025-12-06 09:59:19.607416449 +0000 UTC m=+0.125202493 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 09:59:19 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:59:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63682 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B111F0000000001030307) 
Dec 06 09:59:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30386 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B155F0000000001030307) 
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.335 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.336 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.336 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.337 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.351 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.351 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.352 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.352 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.353 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.353 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.353 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.354 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.354 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.374 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.375 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.375 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.376 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.376 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:59:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39886 DF PROTO=TCP SPT=42436 DPT=9102 SEQ=2498573123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B191F0000000001030307) 
Dec 06 09:59:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:22.835 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.050 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.052 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12522MB free_disk=41.837059020996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.052 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.053 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.146 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.146 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.173 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 09:59:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:59:23 np0005548790.localdomain systemd[1]: tmp-crun.vseY85.mount: Deactivated successfully.
Dec 06 09:59:23 np0005548790.localdomain podman[281582]: 2025-12-06 09:59:23.578519224 +0000 UTC m=+0.090960734 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:59:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:59:23 np0005548790.localdomain podman[281582]: 2025-12-06 09:59:23.6175885 +0000 UTC m=+0.130030000 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:59:23 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.642 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.650 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.677 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.680 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 09:59:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 09:59:23.680 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:59:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30387 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B25200000000001030307) 
Dec 06 09:59:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30388 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B45200000000001030307) 
Dec 06 09:59:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 09:59:35 np0005548790.localdomain podman[281607]: 2025-12-06 09:59:35.560406657 +0000 UTC m=+0.076501990 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 09:59:35 np0005548790.localdomain podman[281607]: 2025-12-06 09:59:35.565163713 +0000 UTC m=+0.081259036 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 09:59:35 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 09:59:35 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T09:59:35Z|00033|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec 06 09:59:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 09:59:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 09:59:40 np0005548790.localdomain systemd[1]: tmp-crun.cysUNV.mount: Deactivated successfully.
Dec 06 09:59:40 np0005548790.localdomain podman[281625]: 2025-12-06 09:59:40.573188641 +0000 UTC m=+0.086338522 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 09:59:40 np0005548790.localdomain podman[281625]: 2025-12-06 09:59:40.611612729 +0000 UTC m=+0.124762620 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 09:59:40 np0005548790.localdomain podman[281626]: 2025-12-06 09:59:40.623029542 +0000 UTC m=+0.131990022 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 09:59:40 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 09:59:40 np0005548790.localdomain podman[281626]: 2025-12-06 09:59:40.638178455 +0000 UTC m=+0.147138975 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 09:59:40 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 09:59:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 09:59:43 np0005548790.localdomain sudo[281666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:43 np0005548790.localdomain sudo[281666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:59:43 np0005548790.localdomain sudo[281666]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:43 np0005548790.localdomain systemd[1]: tmp-crun.7vL6yA.mount: Deactivated successfully.
Dec 06 09:59:43 np0005548790.localdomain podman[281682]: 2025-12-06 09:59:43.591021636 +0000 UTC m=+0.089704401 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 09:59:43 np0005548790.localdomain podman[281682]: 2025-12-06 09:59:43.603040525 +0000 UTC m=+0.101723310 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64)
Dec 06 09:59:43 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 09:59:43 np0005548790.localdomain sudo[281701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 09:59:43 np0005548790.localdomain sudo[281701]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:59:44 np0005548790.localdomain sudo[281701]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:44 np0005548790.localdomain sudo[281754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 09:59:44 np0005548790.localdomain sudo[281754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:59:44 np0005548790.localdomain sudo[281754]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:44 np0005548790.localdomain sudo[281772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 09:59:44 np0005548790.localdomain sudo[281772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:59:45 np0005548790.localdomain sudo[281772]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 09:59:47 np0005548790.localdomain podman[281809]: 2025-12-06 09:59:47.562919902 +0000 UTC m=+0.078202546 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 09:59:47 np0005548790.localdomain podman[281809]: 2025-12-06 09:59:47.574604011 +0000 UTC m=+0.089886655 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Dec 06 09:59:47 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 09:59:48 np0005548790.localdomain sudo[281828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 09:59:48 np0005548790.localdomain sudo[281828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 09:59:48 np0005548790.localdomain sudo[281828]: pam_unix(sudo:session): session closed for user root
Dec 06 09:59:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:59:48.366 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 09:59:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:59:48.367 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 09:59:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 09:59:48.367 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 09:59:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41080 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B7E910000000001030307) 
Dec 06 09:59:48 np0005548790.localdomain podman[239825]: time="2025-12-06T09:59:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 09:59:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:59:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 09:59:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:09:59:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17222 "" "Go-http-client/1.1"
Dec 06 09:59:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41081 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B829F0000000001030307) 
Dec 06 09:59:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30389 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B85200000000001030307) 
Dec 06 09:59:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 09:59:50 np0005548790.localdomain podman[281846]: 2025-12-06 09:59:50.575245199 +0000 UTC m=+0.084441932 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:59:50 np0005548790.localdomain podman[281846]: 2025-12-06 09:59:50.583563459 +0000 UTC m=+0.092760172 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 09:59:50 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 09:59:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41082 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B8AA00000000001030307) 
Dec 06 09:59:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63683 DF PROTO=TCP SPT=41892 DPT=9102 SEQ=1126454578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B8F1F0000000001030307) 
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   09:59:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 09:59:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 09:59:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 09:59:54 np0005548790.localdomain podman[281872]: 2025-12-06 09:59:54.569378044 +0000 UTC m=+0.089225808 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 06 09:59:54 np0005548790.localdomain podman[281872]: 2025-12-06 09:59:54.667686842 +0000 UTC m=+0.187534616 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 09:59:54 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 09:59:54 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 09:59:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41083 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180B9A5F0000000001030307) 
Dec 06 10:00:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41084 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180BBB200000000001030307) 
Dec 06 10:00:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:00:06 np0005548790.localdomain podman[281897]: 2025-12-06 10:00:06.294073662 +0000 UTC m=+0.067342220 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:00:06 np0005548790.localdomain podman[281897]: 2025-12-06 10:00:06.323156769 +0000 UTC m=+0.096425437 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 06 10:00:06 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:00:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:00:06Z|00034|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Dec 06 10:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:00:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.1 total, 600.0 interval
                                                          Cumulative writes: 5270 writes, 23K keys, 5270 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5270 writes, 717 syncs, 7.35 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 84 writes, 245 keys, 84 commit groups, 1.0 writes per commit group, ingest: 0.38 MB, 0.00 MB/s
                                                          Interval WAL: 84 writes, 35 syncs, 2.40 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:00:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:00:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:00:11 np0005548790.localdomain systemd[1]: tmp-crun.v1IyH4.mount: Deactivated successfully.
Dec 06 10:00:11 np0005548790.localdomain podman[281917]: 2025-12-06 10:00:11.556063609 +0000 UTC m=+0.067548278 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:00:11 np0005548790.localdomain podman[281917]: 2025-12-06 10:00:11.568158785 +0000 UTC m=+0.079643454 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:00:11 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:00:11 np0005548790.localdomain podman[281916]: 2025-12-06 10:00:11.605670658 +0000 UTC m=+0.119948282 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:00:11 np0005548790.localdomain podman[281916]: 2025-12-06 10:00:11.642219216 +0000 UTC m=+0.156496840 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:00:11 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:00:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.2 total, 600.0 interval
                                                          Cumulative writes: 5485 writes, 24K keys, 5485 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5485 writes, 761 syncs, 7.21 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 39 writes, 115 keys, 39 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                                          Interval WAL: 39 writes, 19 syncs, 2.05 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:00:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:00:14 np0005548790.localdomain podman[281956]: 2025-12-06 10:00:14.560856691 +0000 UTC m=+0.075932472 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a 
package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 06 10:00:14 np0005548790.localdomain podman[281956]: 2025-12-06 10:00:14.574244153 +0000 UTC m=+0.089319914 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, io.openshift.expose-services=)
Dec 06 10:00:14 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:00:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57914 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180BF3C30000000001030307) 
Dec 06 10:00:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:00:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:00:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:00:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 10:00:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:00:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17227 "" "Go-http-client/1.1"
Dec 06 10:00:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:00:18 np0005548790.localdomain podman[281977]: 2025-12-06 10:00:18.569362465 +0000 UTC m=+0.084553155 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:00:18 np0005548790.localdomain podman[281977]: 2025-12-06 10:00:18.606304083 +0000 UTC m=+0.121494763 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:00:18 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:00:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57915 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180BF7E00000000001030307) 
Dec 06 10:00:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41085 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180BFB1F0000000001030307) 
Dec 06 10:00:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:00:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57916 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180BFFDF0000000001030307) 
Dec 06 10:00:21 np0005548790.localdomain podman[281994]: 2025-12-06 10:00:21.561646919 +0000 UTC m=+0.079611081 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:00:21 np0005548790.localdomain podman[281994]: 2025-12-06 10:00:21.5982889 +0000 UTC m=+0.116253012 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:00:21 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:00:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30390 DF PROTO=TCP SPT=55018 DPT=9102 SEQ=1549442674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C031F0000000001030307) 
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:00:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:00:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:23.673 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:23.674 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:23.861 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:23.861 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:00:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:23.862 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.046 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.047 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.048 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.048 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.353 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.353 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.354 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.354 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:00:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:24.812 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.010 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.012 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12530MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.012 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.012 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.067 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.068 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.085 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:00:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:00:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57917 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C0F9F0000000001030307) 
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.540 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.545 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:00:25 np0005548790.localdomain podman[282058]: 2025-12-06 10:00:25.554482182 +0000 UTC m=+0.072339946 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.560 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.562 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:00:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:00:25.562 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:00:25 np0005548790.localdomain podman[282058]: 2025-12-06 10:00:25.649033376 +0000 UTC m=+0.166891130 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 06 10:00:25 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:00:33 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57918 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C2F1F0000000001030307) 
Dec 06 10:00:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:00:36 np0005548790.localdomain podman[282085]: 2025-12-06 10:00:36.54691417 +0000 UTC m=+0.061348368 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:00:36 np0005548790.localdomain podman[282085]: 2025-12-06 10:00:36.550626501 +0000 UTC m=+0.065060689 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:00:36 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:00:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:00:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:00:42 np0005548790.localdomain podman[282103]: 2025-12-06 10:00:42.559407125 +0000 UTC m=+0.077669570 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:00:42 np0005548790.localdomain podman[282103]: 2025-12-06 10:00:42.57108043 +0000 UTC m=+0.089342905 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:00:42 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:00:42 np0005548790.localdomain podman[282102]: 2025-12-06 10:00:42.62063894 +0000 UTC m=+0.138186346 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:00:42 np0005548790.localdomain podman[282102]: 2025-12-06 10:00:42.633848706 +0000 UTC m=+0.151396122 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:00:42 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:00:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:00:45 np0005548790.localdomain podman[282145]: 2025-12-06 10:00:45.563168679 +0000 UTC m=+0.079037247 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7)
Dec 06 10:00:45 np0005548790.localdomain podman[282145]: 2025-12-06 10:00:45.603390835 +0000 UTC m=+0.119259373 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal)
Dec 06 10:00:45 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:00:48 np0005548790.localdomain sudo[282165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:00:48 np0005548790.localdomain sudo[282165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:00:48 np0005548790.localdomain sudo[282165]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:00:48.367 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:00:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:00:48.368 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:00:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:00:48.368 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:00:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2923 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C68F10000000001030307) 
Dec 06 10:00:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:00:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:00:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:00:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 10:00:48 np0005548790.localdomain sudo[282183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:00:48 np0005548790.localdomain sudo[282183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:00:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:00:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17231 "" "Go-http-client/1.1"
Dec 06 10:00:49 np0005548790.localdomain sudo[282183]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2924 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C6CDF0000000001030307) 
Dec 06 10:00:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:00:49 np0005548790.localdomain systemd[1]: tmp-crun.CjQwID.mount: Deactivated successfully.
Dec 06 10:00:49 np0005548790.localdomain podman[282233]: 2025-12-06 10:00:49.586802183 +0000 UTC m=+0.100325782 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 10:00:49 np0005548790.localdomain podman[282233]: 2025-12-06 10:00:49.60595659 +0000 UTC m=+0.119480169 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:00:49 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:00:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57919 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C6F1F0000000001030307) 
Dec 06 10:00:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2925 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C74DF0000000001030307) 
Dec 06 10:00:52 np0005548790.localdomain sudo[282253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:00:52 np0005548790.localdomain sudo[282253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:00:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:00:52 np0005548790.localdomain sudo[282253]: pam_unix(sudo:session): session closed for user root
Dec 06 10:00:52 np0005548790.localdomain systemd[1]: tmp-crun.R3x3Bp.mount: Deactivated successfully.
Dec 06 10:00:52 np0005548790.localdomain podman[282271]: 2025-12-06 10:00:52.31268999 +0000 UTC m=+0.081425331 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:00:52 np0005548790.localdomain podman[282271]: 2025-12-06 10:00:52.348350594 +0000 UTC m=+0.117085955 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:00:52 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:00:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41086 DF PROTO=TCP SPT=44802 DPT=9102 SEQ=2016751072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C791F0000000001030307) 
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:00:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:00:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:00:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2926 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180C849F0000000001030307) 
Dec 06 10:00:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:00:56 np0005548790.localdomain systemd[1]: tmp-crun.wtjdLW.mount: Deactivated successfully.
Dec 06 10:00:56 np0005548790.localdomain podman[282293]: 2025-12-06 10:00:56.565879657 +0000 UTC m=+0.083558169 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:00:56 np0005548790.localdomain podman[282293]: 2025-12-06 10:00:56.603140324 +0000 UTC m=+0.120818806 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:00:56 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:01:01 np0005548790.localdomain CROND[282319]: (root) CMD (run-parts /etc/cron.hourly)
Dec 06 10:01:01 np0005548790.localdomain run-parts[282322]: (/etc/cron.hourly) starting 0anacron
Dec 06 10:01:01 np0005548790.localdomain run-parts[282328]: (/etc/cron.hourly) finished 0anacron
Dec 06 10:01:01 np0005548790.localdomain CROND[282318]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 06 10:01:03 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2927 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CA5200000000001030307) 
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:01:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:01:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:01:07 np0005548790.localdomain podman[282329]: 2025-12-06 10:01:07.564123372 +0000 UTC m=+0.081662078 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 10:01:07 np0005548790.localdomain podman[282329]: 2025-12-06 10:01:07.568681125 +0000 UTC m=+0.086219801 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 10:01:07 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:01:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:01:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:01:13 np0005548790.localdomain podman[282347]: 2025-12-06 10:01:13.576705239 +0000 UTC m=+0.084986128 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:01:13 np0005548790.localdomain podman[282347]: 2025-12-06 10:01:13.583363569 +0000 UTC m=+0.091644528 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:01:13 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:01:13 np0005548790.localdomain podman[282348]: 2025-12-06 10:01:13.633255797 +0000 UTC m=+0.136220232 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:01:13 np0005548790.localdomain podman[282348]: 2025-12-06 10:01:13.646905646 +0000 UTC m=+0.149870041 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:01:13 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:01:14 np0005548790.localdomain sshd[282389]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:01:14 np0005548790.localdomain sshd[282389]: Accepted publickey for zuul from 38.102.83.114 port 60410 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:01:14 np0005548790.localdomain systemd-logind[760]: New session 61 of user zuul.
Dec 06 10:01:14 np0005548790.localdomain systemd[1]: Started Session 61 of User zuul.
Dec 06 10:01:14 np0005548790.localdomain sshd[282389]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:01:14 np0005548790.localdomain sudo[282409]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urutedvtohacyugfadluvddkfhzikndp ; /usr/bin/python3
Dec 06 10:01:15 np0005548790.localdomain sudo[282409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:01:15 np0005548790.localdomain python3[282411]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 10:01:16 np0005548790.localdomain subscription-manager[282412]: Unregistered machine with identity: 5867a12f-d1f0-415d-b20c-b8a52147736c
Dec 06 10:01:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:01:16 np0005548790.localdomain podman[282414]: 2025-12-06 10:01:16.57847635 +0000 UTC m=+0.082345926 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Dec 06 10:01:16 np0005548790.localdomain podman[282414]: 2025-12-06 10:01:16.596297052 +0000 UTC m=+0.100166618 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Dec 06 10:01:16 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:01:16 np0005548790.localdomain sudo[282409]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:18 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29000 DF PROTO=TCP SPT=57402 DPT=9102 SEQ=2294463830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CDE210000000001030307) 
Dec 06 10:01:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:01:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:01:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:01:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 148370 "" "Go-http-client/1.1"
Dec 06 10:01:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:01:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17234 "" "Go-http-client/1.1"
Dec 06 10:01:19 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29001 DF PROTO=TCP SPT=57402 DPT=9102 SEQ=2294463830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CE21F0000000001030307) 
Dec 06 10:01:20 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2928 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CE5200000000001030307) 
Dec 06 10:01:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:01:20 np0005548790.localdomain systemd[1]: tmp-crun.9xEH7t.mount: Deactivated successfully.
Dec 06 10:01:20 np0005548790.localdomain podman[282436]: 2025-12-06 10:01:20.568323811 +0000 UTC m=+0.077420803 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:01:20 np0005548790.localdomain podman[282436]: 2025-12-06 10:01:20.579311368 +0000 UTC m=+0.088408440 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec 06 10:01:20 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:01:21 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29002 DF PROTO=TCP SPT=57402 DPT=9102 SEQ=2294463830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CEA200000000001030307) 
Dec 06 10:01:22 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57920 DF PROTO=TCP SPT=55892 DPT=9102 SEQ=3511560126 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CED1F0000000001030307) 
Dec 06 10:01:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:01:22 np0005548790.localdomain systemd[1]: tmp-crun.WvAaVS.mount: Deactivated successfully.
Dec 06 10:01:22 np0005548790.localdomain podman[282454]: 2025-12-06 10:01:22.555356703 +0000 UTC m=+0.072755837 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:01:22 np0005548790.localdomain podman[282454]: 2025-12-06 10:01:22.567188752 +0000 UTC m=+0.084587916 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:01:22 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.558 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.558 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.559 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.559 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.579 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.579 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:23.579 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:01:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.357 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.359 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.360 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:01:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:24.820 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.031 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.034 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12520MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.035 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.035 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.136 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.137 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.150 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:01:25 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29003 DF PROTO=TCP SPT=57402 DPT=9102 SEQ=2294463830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180CF9DF0000000001030307) 
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.593 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.597 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.619 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.620 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:01:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:25.620 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:01:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:26.619 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:26.620 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:01:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:01:26.621 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:01:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:01:27 np0005548790.localdomain podman[282520]: 2025-12-06 10:01:27.571525636 +0000 UTC m=+0.086642013 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:01:27 np0005548790.localdomain podman[282520]: 2025-12-06 10:01:27.608432423 +0000 UTC m=+0.123548860 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:01:27 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:01:34 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29004 DF PROTO=TCP SPT=57402 DPT=9102 SEQ=2294463830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D1B1F0000000001030307) 
Dec 06 10:01:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:01:38 np0005548790.localdomain podman[282546]: 2025-12-06 10:01:38.555858046 +0000 UTC m=+0.074551596 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:01:38 np0005548790.localdomain podman[282546]: 2025-12-06 10:01:38.589108873 +0000 UTC m=+0.107802453 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:01:38 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:01:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:01:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:01:44 np0005548790.localdomain podman[282562]: 2025-12-06 10:01:44.86923202 +0000 UTC m=+0.077201857 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:01:44 np0005548790.localdomain podman[282562]: 2025-12-06 10:01:44.880404122 +0000 UTC m=+0.088373999 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:01:44 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:01:44 np0005548790.localdomain podman[282563]: 2025-12-06 10:01:44.927567247 +0000 UTC m=+0.130722724 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3)
Dec 06 10:01:44 np0005548790.localdomain podman[282563]: 2025-12-06 10:01:44.941191205 +0000 UTC m=+0.144346732 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec 06 10:01:44 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:01:46 np0005548790.localdomain sudo[282604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:01:46 np0005548790.localdomain sudo[282604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:01:46 np0005548790.localdomain sudo[282604]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:46 np0005548790.localdomain sudo[282622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 10:01:46 np0005548790.localdomain sudo[282622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 2025-12-06 10:01:47.100284936 +0000 UTC m=+0.076711234 container create 545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_maxwell, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., release=1763362218, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, distribution-scope=public)
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: Started libpod-conmon-545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07.scope.
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 2025-12-06 10:01:47.166948527 +0000 UTC m=+0.143374825 container init 545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_maxwell, io.buildah.version=1.41.4, version=7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 2025-12-06 10:01:47.06825246 +0000 UTC m=+0.044678778 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 2025-12-06 10:01:47.17889151 +0000 UTC m=+0.155317808 container start 545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_maxwell, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, GIT_CLEAN=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, architecture=x86_64, version=7)
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 2025-12-06 10:01:47.179238669 +0000 UTC m=+0.155665007 container attach 545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_maxwell, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:01:47 np0005548790.localdomain cranky_maxwell[282696]: 167 167
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: libpod-545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07.scope: Deactivated successfully.
Dec 06 10:01:47 np0005548790.localdomain podman[282680]: 2025-12-06 10:01:47.183062582 +0000 UTC m=+0.159488870 container died 545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_maxwell, ceph=True, version=7, GIT_CLEAN=True, name=rhceph, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:01:47 np0005548790.localdomain podman[282695]: 2025-12-06 10:01:47.279338095 +0000 UTC m=+0.142143953 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:01:47 np0005548790.localdomain podman[282710]: 2025-12-06 10:01:47.334886865 +0000 UTC m=+0.138297088 container remove 545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_maxwell, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, ceph=True, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: libpod-conmon-545bb146f41195ba5cea1ccd13e8993d7f1db8119eef4eb0b5fd13eb1b995c07.scope: Deactivated successfully.
Dec 06 10:01:47 np0005548790.localdomain podman[282695]: 2025-12-06 10:01:47.374727592 +0000 UTC m=+0.237533450 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:01:47 np0005548790.localdomain podman[282741]: 
Dec 06 10:01:47 np0005548790.localdomain podman[282741]: 2025-12-06 10:01:47.546713519 +0000 UTC m=+0.077041983 container create b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_dhawan, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, release=1763362218, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: Started libpod-conmon-b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1.scope.
Dec 06 10:01:47 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:01:47 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93754f42447e5e49159eaa684fc7af92552c8da38fdb5f673910490d362363f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:47 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93754f42447e5e49159eaa684fc7af92552c8da38fdb5f673910490d362363f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:47 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93754f42447e5e49159eaa684fc7af92552c8da38fdb5f673910490d362363f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:47 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93754f42447e5e49159eaa684fc7af92552c8da38fdb5f673910490d362363f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:01:47 np0005548790.localdomain podman[282741]: 2025-12-06 10:01:47.514238731 +0000 UTC m=+0.044567205 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:01:47 np0005548790.localdomain podman[282741]: 2025-12-06 10:01:47.616662279 +0000 UTC m=+0.146990743 container init b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_dhawan, vendor=Red Hat, Inc., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=)
Dec 06 10:01:47 np0005548790.localdomain podman[282741]: 2025-12-06 10:01:47.628892139 +0000 UTC m=+0.159220593 container start b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_dhawan, release=1763362218, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Dec 06 10:01:47 np0005548790.localdomain podman[282741]: 2025-12-06 10:01:47.629179547 +0000 UTC m=+0.159508051 container attach b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_dhawan, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, version=7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main)
Dec 06 10:01:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7835c0a3d41b944f7ed687d25496460e84082dbd5b06fa9fdf8b7d6747fb5aa9-merged.mount: Deactivated successfully.
Dec 06 10:01:48 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22422 DF PROTO=TCP SPT=37588 DPT=9102 SEQ=2892369116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D53510000000001030307) 
Dec 06 10:01:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:01:48.391 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:01:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:01:48.397 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:01:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:01:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:01:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:01:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:01:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:01:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150378 "" "Go-http-client/1.1"
Dec 06 10:01:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:01:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17673 "" "Go-http-client/1.1"
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]: [
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:     {
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "available": false,
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "ceph_device": false,
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "lsm_data": {},
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "lvs": [],
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "path": "/dev/sr0",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "rejected_reasons": [
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "Insufficient space (<5GB)",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "Has a FileSystem"
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         ],
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         "sys_api": {
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "actuators": null,
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "device_nodes": "sr0",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "human_readable_size": "482.00 KB",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "id_bus": "ata",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "model": "QEMU DVD-ROM",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "nr_requests": "2",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "partitions": {},
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "path": "/dev/sr0",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "removable": "1",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "rev": "2.5+",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "ro": "0",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "rotational": "1",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "sas_address": "",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "sas_device_handle": "",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "scheduler_mode": "mq-deadline",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "sectors": 0,
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "sectorsize": "2048",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "size": 493568.0,
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "support_discard": "0",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "type": "disk",
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:             "vendor": "QEMU"
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:         }
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]:     }
Dec 06 10:01:48 np0005548790.localdomain hardcore_dhawan[282756]: ]
Dec 06 10:01:48 np0005548790.localdomain systemd[1]: libpod-b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1.scope: Deactivated successfully.
Dec 06 10:01:48 np0005548790.localdomain systemd[1]: libpod-b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1.scope: Consumed 1.121s CPU time.
Dec 06 10:01:48 np0005548790.localdomain podman[282741]: 2025-12-06 10:01:48.707353441 +0000 UTC m=+1.237681895 container died b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_dhawan, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:01:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-93754f42447e5e49159eaa684fc7af92552c8da38fdb5f673910490d362363f6-merged.mount: Deactivated successfully.
Dec 06 10:01:48 np0005548790.localdomain podman[284753]: 2025-12-06 10:01:48.799540102 +0000 UTC m=+0.081062992 container remove b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_dhawan, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-type=git)
Dec 06 10:01:48 np0005548790.localdomain systemd[1]: libpod-conmon-b5486cd946a24eed3adffe55c4ce57d204b35c91628648f87a26235c23a148b1.scope: Deactivated successfully.
Dec 06 10:01:48 np0005548790.localdomain sudo[282622]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:49 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22423 DF PROTO=TCP SPT=37588 DPT=9102 SEQ=2892369116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D575F0000000001030307) 
Dec 06 10:01:49 np0005548790.localdomain sudo[284767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:01:49 np0005548790.localdomain sudo[284767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:01:49 np0005548790.localdomain sudo[284767]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:49 np0005548790.localdomain sudo[284785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:01:49 np0005548790.localdomain sudo[284785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:01:49 np0005548790.localdomain sudo[284785]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:49 np0005548790.localdomain sudo[284803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:01:49 np0005548790.localdomain sudo[284803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:01:50 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29005 DF PROTO=TCP SPT=57402 DPT=9102 SEQ=2294463830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D5B200000000001030307) 
Dec 06 10:01:50 np0005548790.localdomain sudo[284803]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:51 np0005548790.localdomain sudo[284852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:01:51 np0005548790.localdomain sudo[284852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:01:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:01:51 np0005548790.localdomain sudo[284852]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:51 np0005548790.localdomain systemd[1]: tmp-crun.XEFn1V.mount: Deactivated successfully.
Dec 06 10:01:51 np0005548790.localdomain podman[284870]: 2025-12-06 10:01:51.299289578 +0000 UTC m=+0.097863506 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:01:51 np0005548790.localdomain podman[284870]: 2025-12-06 10:01:51.315116495 +0000 UTC m=+0.113690383 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 10:01:51 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:01:51 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22424 DF PROTO=TCP SPT=37588 DPT=9102 SEQ=2892369116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D5F5F0000000001030307) 
Dec 06 10:01:52 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2929 DF PROTO=TCP SPT=47002 DPT=9102 SEQ=4212113578 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D63200000000001030307) 
Dec 06 10:01:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:01:53 np0005548790.localdomain podman[284889]: 2025-12-06 10:01:53.578116614 +0000 UTC m=+0.093386304 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:01:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:01:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:01:53 np0005548790.localdomain podman[284889]: 2025-12-06 10:01:53.614425945 +0000 UTC m=+0.129695655 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:01:53 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:01:54 np0005548790.localdomain sshd[284911]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:01:54 np0005548790.localdomain sshd[284911]: Accepted publickey for tripleo-admin from 192.168.122.11 port 55102 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:01:54 np0005548790.localdomain systemd-logind[760]: New session 62 of user tripleo-admin.
Dec 06 10:01:54 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 06 10:01:54 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 06 10:01:54 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 06 10:01:54 np0005548790.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 06 10:01:54 np0005548790.localdomain systemd[284915]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Queued start job for default target Main User Target.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Created slice User Application Slice.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Reached target Paths.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Reached target Timers.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Starting D-Bus User Message Bus Socket...
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Starting Create User's Volatile Files and Directories...
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Finished Create User's Volatile Files and Directories.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Listening on D-Bus User Message Bus Socket.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Reached target Sockets.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Reached target Basic System.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Reached target Main User Target.
Dec 06 10:01:55 np0005548790.localdomain systemd[284915]: Startup finished in 138ms.
Dec 06 10:01:55 np0005548790.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 06 10:01:55 np0005548790.localdomain systemd[1]: Started Session 62 of User tripleo-admin.
Dec 06 10:01:55 np0005548790.localdomain sshd[284911]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 06 10:01:55 np0005548790.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:1e:1a:76 MACDST=fa:16:3e:26:ab:84 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22425 DF PROTO=TCP SPT=37588 DPT=9102 SEQ=2892369116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A180D6F200000000001030307) 
Dec 06 10:01:55 np0005548790.localdomain sudo[285056]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viyuxutykjfkxpilisvsnwxaakhuvoai ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015315.1493666-60088-216553207755247/AnsiballZ_blockinfile.py
Dec 06 10:01:55 np0005548790.localdomain sudo[285056]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 10:01:55 np0005548790.localdomain python3[285058]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"
                                                          # 100 ceph_dashboard (8443)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"
                                                          # 100 ceph_grafana (3100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"
                                                          # 100 ceph_prometheus (9092)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"
                                                          # 100 ceph_rgw (8080)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"
                                                          # 110 ceph_mon (6789, 3300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
                                                          # 112 ceph_mds (6800-7300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"
                                                          # 113 ceph_mgr (6800-7300, 8444)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"
                                                          # 120 ceph_nfs (2049, 12049)
                                                          add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"
                                                          # 123 ceph_dashboard (9090, 9094, 9283)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"
                                                           insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 10:01:55 np0005548790.localdomain sudo[285056]: pam_unix(sudo:session): session closed for user root
Dec 06 10:01:55 np0005548790.localdomain systemd-journald[47675]: Field hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation.
Dec 06 10:01:55 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 10:01:55 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:01:55 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:01:56 np0005548790.localdomain sudo[285201]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbxyvrzghbvwgixofstzlhiydpfifztd ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015315.9601874-60102-71015981414618/AnsiballZ_systemd.py
Dec 06 10:01:56 np0005548790.localdomain sudo[285201]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 10:01:56 np0005548790.localdomain python3[285203]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: Stopping Netfilter Tables...
Dec 06 10:01:57 np0005548790.localdomain podman[285206]: 2025-12-06 10:01:57.83163827 +0000 UTC m=+0.085279915 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: nftables.service: Deactivated successfully.
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: Stopped Netfilter Tables.
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: Starting Netfilter Tables...
Dec 06 10:01:57 np0005548790.localdomain podman[285206]: 2025-12-06 10:01:57.901475337 +0000 UTC m=+0.155116992 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: Finished Netfilter Tables.
Dec 06 10:01:57 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:01:57 np0005548790.localdomain sudo[285201]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:00 np0005548790.localdomain sudo[285252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:00 np0005548790.localdomain sudo[285252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:00 np0005548790.localdomain sudo[285252]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:02 np0005548790.localdomain sudo[285270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:02 np0005548790.localdomain sudo[285270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:02 np0005548790.localdomain sudo[285270]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:03 np0005548790.localdomain sudo[285288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:03 np0005548790.localdomain sudo[285288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:03 np0005548790.localdomain sudo[285288]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:04 np0005548790.localdomain sudo[285306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:04 np0005548790.localdomain sudo[285306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:04 np0005548790.localdomain sudo[285306]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:05 np0005548790.localdomain sudo[285324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:05 np0005548790.localdomain sudo[285324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:05 np0005548790.localdomain sudo[285324]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:07 np0005548790.localdomain sudo[285342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:07 np0005548790.localdomain sudo[285342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:07 np0005548790.localdomain sudo[285342]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:08 np0005548790.localdomain sudo[285360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:02:08 np0005548790.localdomain sudo[285360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:08 np0005548790.localdomain sudo[285360]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:08 np0005548790.localdomain sudo[285378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:02:08 np0005548790.localdomain sudo[285378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:02:08 np0005548790.localdomain podman[285411]: 2025-12-06 10:02:08.770316306 +0000 UTC m=+0.092768554 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 10:02:08 np0005548790.localdomain podman[285411]: 2025-12-06 10:02:08.804247369 +0000 UTC m=+0.126699587 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:02:08 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:02:08 np0005548790.localdomain podman[285455]: 
Dec 06 10:02:08 np0005548790.localdomain podman[285455]: 2025-12-06 10:02:08.94181644 +0000 UTC m=+0.062260504 container create bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_bartik, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 10:02:08 np0005548790.localdomain systemd[1]: Started libpod-conmon-bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a.scope.
Dec 06 10:02:08 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:02:09 np0005548790.localdomain podman[285455]: 2025-12-06 10:02:08.913904201 +0000 UTC m=+0.034348325 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:02:09 np0005548790.localdomain podman[285455]: 2025-12-06 10:02:09.013692646 +0000 UTC m=+0.134136710 container init bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_bartik, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, version=7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Dec 06 10:02:09 np0005548790.localdomain podman[285455]: 2025-12-06 10:02:09.023261676 +0000 UTC m=+0.143705740 container start bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_bartik, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7)
Dec 06 10:02:09 np0005548790.localdomain podman[285455]: 2025-12-06 10:02:09.023496412 +0000 UTC m=+0.143940496 container attach bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_bartik, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public)
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: libpod-bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a.scope: Deactivated successfully.
Dec 06 10:02:09 np0005548790.localdomain flamboyant_bartik[285470]: 167 167
Dec 06 10:02:09 np0005548790.localdomain podman[285455]: 2025-12-06 10:02:09.028475318 +0000 UTC m=+0.148919442 container died bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_bartik, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Dec 06 10:02:09 np0005548790.localdomain podman[285475]: 2025-12-06 10:02:09.124539471 +0000 UTC m=+0.082453334 container remove bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_bartik, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: libpod-conmon-bb5af50155c274bd7f04dbde24c0b31d98d4cca6b4fc2b30608b115794745b2a.scope: Deactivated successfully.
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:02:09 np0005548790.localdomain systemd-sysv-generator[285520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:02:09 np0005548790.localdomain systemd-rc-local-generator[285514]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-79a42d8ef67d82d0e5b575af15dc3ef15a04f5a2629d9805d26bafa68736e7a4-merged.mount: Deactivated successfully.
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:02:09 np0005548790.localdomain systemd-sysv-generator[285558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:02:09 np0005548790.localdomain systemd-rc-local-generator[285554]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:02:09 np0005548790.localdomain systemd[1]: Starting Ceph mds.mds.np0005548790.vhcezv for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 10:02:10 np0005548790.localdomain podman[285617]: 
Dec 06 10:02:10 np0005548790.localdomain podman[285617]: 2025-12-06 10:02:10.286651548 +0000 UTC m=+0.077825157 container create c9df1b3b889ee3e1dd4b360d5717015bb9d7cadc67e328a0396603d7b3948c48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548790-vhcezv, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, release=1763362218, name=rhceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True)
Dec 06 10:02:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9208c67c283c79804024b44d198c457c4c91cb8825a9a32c66ccdf90a5280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9208c67c283c79804024b44d198c457c4c91cb8825a9a32c66ccdf90a5280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9208c67c283c79804024b44d198c457c4c91cb8825a9a32c66ccdf90a5280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a9208c67c283c79804024b44d198c457c4c91cb8825a9a32c66ccdf90a5280/merged/var/lib/ceph/mds/ceph-mds.np0005548790.vhcezv supports timestamps until 2038 (0x7fffffff)
Dec 06 10:02:10 np0005548790.localdomain podman[285617]: 2025-12-06 10:02:10.354233006 +0000 UTC m=+0.145406615 container init c9df1b3b889ee3e1dd4b360d5717015bb9d7cadc67e328a0396603d7b3948c48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548790-vhcezv, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=)
Dec 06 10:02:10 np0005548790.localdomain podman[285617]: 2025-12-06 10:02:10.256234782 +0000 UTC m=+0.047408431 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:02:10 np0005548790.localdomain podman[285617]: 2025-12-06 10:02:10.362052409 +0000 UTC m=+0.153226018 container start c9df1b3b889ee3e1dd4b360d5717015bb9d7cadc67e328a0396603d7b3948c48 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548790-vhcezv, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, RELEASE=main, version=7, vcs-type=git, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Dec 06 10:02:10 np0005548790.localdomain bash[285617]: c9df1b3b889ee3e1dd4b360d5717015bb9d7cadc67e328a0396603d7b3948c48
Dec 06 10:02:10 np0005548790.localdomain systemd[1]: Started Ceph mds.mds.np0005548790.vhcezv for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 10:02:10 np0005548790.localdomain sudo[285378]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:10 np0005548790.localdomain ceph-mds[285635]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 10:02:10 np0005548790.localdomain ceph-mds[285635]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Dec 06 10:02:10 np0005548790.localdomain ceph-mds[285635]: main not setting numa affinity
Dec 06 10:02:10 np0005548790.localdomain ceph-mds[285635]: pidfile_write: ignore empty --pid-file
Dec 06 10:02:10 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548790-vhcezv[285631]: starting mds.mds.np0005548790.vhcezv at 
Dec 06 10:02:10 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Updating MDS map to version 6 from mon.1
Dec 06 10:02:11 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Updating MDS map to version 7 from mon.1
Dec 06 10:02:11 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Monitors have assigned me to become a standby.
Dec 06 10:02:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:02:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:02:15 np0005548790.localdomain podman[285655]: 2025-12-06 10:02:15.578153843 +0000 UTC m=+0.087780338 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:02:15 np0005548790.localdomain podman[285655]: 2025-12-06 10:02:15.621341869 +0000 UTC m=+0.130968384 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:02:15 np0005548790.localdomain podman[285656]: 2025-12-06 10:02:15.625420719 +0000 UTC m=+0.132560846 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:15 np0005548790.localdomain podman[285656]: 2025-12-06 10:02:15.639189984 +0000 UTC m=+0.146330091 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:02:15 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:02:15 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:02:15 np0005548790.localdomain sudo[285695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:15 np0005548790.localdomain sudo[285695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:15 np0005548790.localdomain sudo[285695]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:15 np0005548790.localdomain sudo[285713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:02:15 np0005548790.localdomain sudo[285713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:15 np0005548790.localdomain sudo[285713]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:15 np0005548790.localdomain sudo[285731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:02:15 np0005548790.localdomain sudo[285731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:16 np0005548790.localdomain systemd[1]: tmp-crun.LTGmh5.mount: Deactivated successfully.
Dec 06 10:02:16 np0005548790.localdomain podman[285823]: 2025-12-06 10:02:16.690052076 +0000 UTC m=+0.088289692 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1763362218, architecture=x86_64)
Dec 06 10:02:16 np0005548790.localdomain podman[285823]: 2025-12-06 10:02:16.795233097 +0000 UTC m=+0.193470723 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:02:17 np0005548790.localdomain sshd[282392]: Received disconnect from 38.102.83.114 port 60410:11: disconnected by user
Dec 06 10:02:17 np0005548790.localdomain sshd[282392]: Disconnected from user zuul 38.102.83.114 port 60410
Dec 06 10:02:17 np0005548790.localdomain sshd[282389]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:02:17 np0005548790.localdomain systemd[1]: session-61.scope: Deactivated successfully.
Dec 06 10:02:17 np0005548790.localdomain systemd-logind[760]: Session 61 logged out. Waiting for processes to exit.
Dec 06 10:02:17 np0005548790.localdomain systemd-logind[760]: Removed session 61.
Dec 06 10:02:17 np0005548790.localdomain sudo[285731]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:02:17 np0005548790.localdomain systemd[1]: tmp-crun.2jDqMO.mount: Deactivated successfully.
Dec 06 10:02:17 np0005548790.localdomain podman[285904]: 2025-12-06 10:02:17.572261432 +0000 UTC m=+0.085957240 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 10:02:17 np0005548790.localdomain podman[285904]: 2025-12-06 10:02:17.589066849 +0000 UTC m=+0.102762657 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Dec 06 10:02:17 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:02:17 np0005548790.localdomain sudo[285923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:17 np0005548790.localdomain sudo[285923]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:17 np0005548790.localdomain sudo[285923]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:18 np0005548790.localdomain sudo[285941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:02:18 np0005548790.localdomain sudo[285941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:02:18 np0005548790.localdomain sudo[285941]: pam_unix(sudo:session): session closed for user root
Dec 06 10:02:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:02:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:02:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:02:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150517 "" "Go-http-client/1.1"
Dec 06 10:02:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:02:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17720 "" "Go-http-client/1.1"
Dec 06 10:02:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:02:21 np0005548790.localdomain podman[285959]: 2025-12-06 10:02:21.56242905 +0000 UTC m=+0.079048501 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:02:21 np0005548790.localdomain podman[285959]: 2025-12-06 10:02:21.578253061 +0000 UTC m=+0.094872532 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:02:21 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:02:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:22.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:22.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:22.332 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:02:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:22.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:02:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:22.352 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:02:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.353 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.355 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.355 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:02:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:02:24 np0005548790.localdomain podman[285980]: 2025-12-06 10:02:24.562708354 +0000 UTC m=+0.081555379 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:02:24 np0005548790.localdomain podman[285980]: 2025-12-06 10:02:24.572186563 +0000 UTC m=+0.091033608 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:02:24 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:02:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:24.810 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.023 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.025 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12493MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.026 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.026 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.206 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.207 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.226 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.690 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.698 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.717 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.720 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:02:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:25.720 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:02:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:26.717 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:26.749 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:26.750 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:27.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:27.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:02:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:02:27.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:02:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:02:28 np0005548790.localdomain podman[286044]: 2025-12-06 10:02:28.566503024 +0000 UTC m=+0.084131479 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:28 np0005548790.localdomain podman[286044]: 2025-12-06 10:02:28.667176212 +0000 UTC m=+0.184804627 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:02:28 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:02:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:02:39 np0005548790.localdomain podman[286069]: 2025-12-06 10:02:39.573882192 +0000 UTC m=+0.087984484 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:02:39 np0005548790.localdomain podman[286069]: 2025-12-06 10:02:39.579262849 +0000 UTC m=+0.093365181 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:02:39 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:02:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:02:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:02:46 np0005548790.localdomain podman[286087]: 2025-12-06 10:02:46.574342038 +0000 UTC m=+0.087694456 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:02:46 np0005548790.localdomain podman[286087]: 2025-12-06 10:02:46.606730978 +0000 UTC m=+0.120083346 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:02:46 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:02:46 np0005548790.localdomain podman[286088]: 2025-12-06 10:02:46.626736333 +0000 UTC m=+0.137873342 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec 06 10:02:46 np0005548790.localdomain podman[286088]: 2025-12-06 10:02:46.636533459 +0000 UTC m=+0.147670468 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:02:46 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:02:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:02:48.383 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:02:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:02:48.383 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:02:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:02:48.384 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:02:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:02:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:02:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:02:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150517 "" "Go-http-client/1.1"
Dec 06 10:02:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:02:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17719 "" "Go-http-client/1.1"
Dec 06 10:02:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:02:48 np0005548790.localdomain systemd[1]: tmp-crun.50AFZi.mount: Deactivated successfully.
Dec 06 10:02:48 np0005548790.localdomain podman[286127]: 2025-12-06 10:02:48.568358843 +0000 UTC m=+0.082598168 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 06 10:02:48 np0005548790.localdomain podman[286127]: 2025-12-06 10:02:48.586221269 +0000 UTC m=+0.100460584 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 10:02:48 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:02:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:02:52 np0005548790.localdomain systemd[1]: tmp-crun.7WzTKN.mount: Deactivated successfully.
Dec 06 10:02:52 np0005548790.localdomain podman[286147]: 2025-12-06 10:02:52.571844553 +0000 UTC m=+0.089567057 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:02:52 np0005548790.localdomain podman[286147]: 2025-12-06 10:02:52.58203578 +0000 UTC m=+0.099758274 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, tcib_managed=true)
Dec 06 10:02:52 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:02:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:02:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:02:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:02:55 np0005548790.localdomain podman[286166]: 2025-12-06 10:02:55.565799704 +0000 UTC m=+0.081137437 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:02:55 np0005548790.localdomain podman[286166]: 2025-12-06 10:02:55.57925332 +0000 UTC m=+0.094591073 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:02:55 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:02:57 np0005548790.localdomain sshd[284931]: Received disconnect from 192.168.122.11 port 55102:11: disconnected by user
Dec 06 10:02:57 np0005548790.localdomain sshd[284931]: Disconnected from user tripleo-admin 192.168.122.11 port 55102
Dec 06 10:02:57 np0005548790.localdomain sshd[284911]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 06 10:02:57 np0005548790.localdomain systemd[1]: session-62.scope: Deactivated successfully.
Dec 06 10:02:57 np0005548790.localdomain systemd[1]: session-62.scope: Consumed 1.338s CPU time.
Dec 06 10:02:57 np0005548790.localdomain systemd-logind[760]: Session 62 logged out. Waiting for processes to exit.
Dec 06 10:02:57 np0005548790.localdomain systemd-logind[760]: Removed session 62.
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Updating MDS map to version 13 from mon.1
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map state change up:standby --> up:replay
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.13 replay_start
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.13  waiting for osdmap 87 (which blocklists prior instance)
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.cache creating system inode with ino:0x100
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.cache creating system inode with ino:0x1
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.13 Finished replaying journal
Dec 06 10:02:59 np0005548790.localdomain ceph-mds[285635]: mds.0.13 making mds journal writeable
Dec 06 10:02:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:02:59 np0005548790.localdomain podman[286199]: 2025-12-06 10:02:59.560231369 +0000 UTC m=+0.074334852 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:02:59 np0005548790.localdomain podman[286199]: 2025-12-06 10:02:59.665341188 +0000 UTC m=+0.179444671 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:02:59 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:03:00 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Updating MDS map to version 14 from mon.1
Dec 06 10:03:00 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 06 10:03:00 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map state change up:replay --> up:reconnect
Dec 06 10:03:00 np0005548790.localdomain ceph-mds[285635]: mds.0.13 reconnect_start
Dec 06 10:03:00 np0005548790.localdomain ceph-mds[285635]: mds.0.13 reopen_log
Dec 06 10:03:00 np0005548790.localdomain ceph-mds[285635]: mds.0.13 reconnect_done
Dec 06 10:03:01 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Updating MDS map to version 15 from mon.1
Dec 06 10:03:01 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 06 10:03:01 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map state change up:reconnect --> up:rejoin
Dec 06 10:03:01 np0005548790.localdomain ceph-mds[285635]: mds.0.13 rejoin_start
Dec 06 10:03:01 np0005548790.localdomain ceph-mds[285635]: mds.0.13 rejoin_joint_start
Dec 06 10:03:01 np0005548790.localdomain ceph-mds[285635]: mds.0.13 rejoin_done
Dec 06 10:03:02 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv Updating MDS map to version 16 from mon.1
Dec 06 10:03:02 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map i am now mds.0.13
Dec 06 10:03:02 np0005548790.localdomain ceph-mds[285635]: mds.0.13 handle_mds_map state change up:rejoin --> up:active
Dec 06 10:03:02 np0005548790.localdomain ceph-mds[285635]: mds.0.13 recovery_done -- successful recovery!
Dec 06 10:03:02 np0005548790.localdomain ceph-mds[285635]: mds.0.13 active_start
Dec 06 10:03:02 np0005548790.localdomain ceph-mds[285635]: mds.0.13 cluster recovered.
Dec 06 10:03:06 np0005548790.localdomain ceph-mds[285635]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 06 10:03:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mds-mds-np0005548790-vhcezv[285631]: 2025-12-06T10:03:06.460+0000 7f3017d20640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.322 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:03:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Activating special unit Exit the Session...
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped target Main User Target.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped target Basic System.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped target Paths.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped target Sockets.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped target Timers.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Closed D-Bus User Message Bus Socket.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Stopped Create User's Volatile Files and Directories.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Removed slice User Application Slice.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Reached target Shutdown.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Finished Exit the Session.
Dec 06 10:03:07 np0005548790.localdomain systemd[284915]: Reached target Exit the Session.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 06 10:03:07 np0005548790.localdomain systemd[1]: user-1003.slice: Consumed 1.707s CPU time.
Dec 06 10:03:08 np0005548790.localdomain sudo[286229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:08 np0005548790.localdomain sudo[286229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:08 np0005548790.localdomain sudo[286229]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:08 np0005548790.localdomain sudo[286247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:08 np0005548790.localdomain sudo[286247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:08 np0005548790.localdomain sudo[286247]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:08 np0005548790.localdomain sudo[286265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:03:08 np0005548790.localdomain sudo[286265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:09 np0005548790.localdomain sudo[286265]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:03:10 np0005548790.localdomain systemd[1]: tmp-crun.HEblow.mount: Deactivated successfully.
Dec 06 10:03:10 np0005548790.localdomain podman[286314]: 2025-12-06 10:03:10.575870394 +0000 UTC m=+0.093532415 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:03:10 np0005548790.localdomain podman[286314]: 2025-12-06 10:03:10.610063064 +0000 UTC m=+0.127724995 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:03:10 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:03:11 np0005548790.localdomain sudo[286332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:11 np0005548790.localdomain sudo[286332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:11 np0005548790.localdomain sudo[286332]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:12 np0005548790.localdomain sudo[286350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:12 np0005548790.localdomain sudo[286350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:12 np0005548790.localdomain sudo[286350]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:03:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:03:17 np0005548790.localdomain podman[286368]: 2025-12-06 10:03:17.56757244 +0000 UTC m=+0.076387379 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:03:17 np0005548790.localdomain podman[286368]: 2025-12-06 10:03:17.603375654 +0000 UTC m=+0.112190613 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:03:17 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:03:17 np0005548790.localdomain podman[286369]: 2025-12-06 10:03:17.62969994 +0000 UTC m=+0.135463476 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 10:03:17 np0005548790.localdomain podman[286369]: 2025-12-06 10:03:17.643212597 +0000 UTC m=+0.148976143 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:03:17 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:03:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:03:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:03:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:03:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150517 "" "Go-http-client/1.1"
Dec 06 10:03:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:03:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17725 "" "Go-http-client/1.1"
Dec 06 10:03:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:03:19 np0005548790.localdomain systemd[1]: tmp-crun.89p3l0.mount: Deactivated successfully.
Dec 06 10:03:19 np0005548790.localdomain podman[286411]: 2025-12-06 10:03:19.574053574 +0000 UTC m=+0.091127000 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Dec 06 10:03:19 np0005548790.localdomain podman[286411]: 2025-12-06 10:03:19.587391806 +0000 UTC m=+0.104465222 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, release=1755695350, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41)
Dec 06 10:03:19 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:03:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:22.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:22.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:03:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:22.359 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:03:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:22.360 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:22.361 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:03:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:22.375 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:23.384 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:23.384 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:03:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:23.385 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:03:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:23.405 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:03:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:03:23 np0005548790.localdomain podman[286432]: 2025-12-06 10:03:23.567618954 +0000 UTC m=+0.084706575 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:03:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:03:23 np0005548790.localdomain podman[286432]: 2025-12-06 10:03:23.609292148 +0000 UTC m=+0.126379709 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:03:23 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:03:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:24.349 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:25.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:25.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.356 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.356 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:03:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:03:26 np0005548790.localdomain podman[286469]: 2025-12-06 10:03:26.564091406 +0000 UTC m=+0.074250411 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:03:26 np0005548790.localdomain podman[286469]: 2025-12-06 10:03:26.575477435 +0000 UTC m=+0.085636490 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:03:26 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.756 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.958 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.960 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12500MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.960 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:03:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:26.961 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.059 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.059 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.124 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.193 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.194 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.209 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.236 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.253 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.674 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.679 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.701 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.704 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:03:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:27.704 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:03:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:28.704 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:28.705 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:28.706 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:03:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:03:29.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:03:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:03:30 np0005548790.localdomain podman[286516]: 2025-12-06 10:03:30.567156106 +0000 UTC m=+0.084563341 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:03:30 np0005548790.localdomain podman[286516]: 2025-12-06 10:03:30.634381554 +0000 UTC m=+0.151788769 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:03:30 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:03:33 np0005548790.localdomain sudo[286542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:33 np0005548790.localdomain sudo[286542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:33 np0005548790.localdomain sudo[286542]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:33 np0005548790.localdomain sudo[286560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 10:03:33 np0005548790.localdomain sudo[286560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:33 np0005548790.localdomain sudo[286560]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:34 np0005548790.localdomain sudo[286599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:34 np0005548790.localdomain sudo[286599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:34 np0005548790.localdomain sudo[286599]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:34 np0005548790.localdomain sudo[286617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:34 np0005548790.localdomain sudo[286617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:34 np0005548790.localdomain sudo[286617]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:36 np0005548790.localdomain sudo[286635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:36 np0005548790.localdomain sudo[286635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:36 np0005548790.localdomain sudo[286635]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:03:41 np0005548790.localdomain podman[286653]: 2025-12-06 10:03:41.604864701 +0000 UTC m=+0.117538488 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 10:03:41 np0005548790.localdomain podman[286653]: 2025-12-06 10:03:41.61513242 +0000 UTC m=+0.127806257 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 10:03:41 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:03:42 np0005548790.localdomain sudo[286671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:42 np0005548790.localdomain sudo[286671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:42 np0005548790.localdomain sudo[286671]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:42 np0005548790.localdomain sudo[286689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:03:42 np0005548790.localdomain sudo[286689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 2025-12-06 10:03:42.660205295 +0000 UTC m=+0.059257214 container create d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_mccarthy, ceph=True, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:03:42 np0005548790.localdomain systemd[1]: Started libpod-conmon-d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd.scope.
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 2025-12-06 10:03:42.633493788 +0000 UTC m=+0.032545777 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:03:42 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 2025-12-06 10:03:42.75711155 +0000 UTC m=+0.156163499 container init d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_mccarthy, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7)
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 2025-12-06 10:03:42.767994326 +0000 UTC m=+0.167046285 container start d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_mccarthy, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, vcs-type=git, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 2025-12-06 10:03:42.768294264 +0000 UTC m=+0.167346253 container attach d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_mccarthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container)
Dec 06 10:03:42 np0005548790.localdomain exciting_mccarthy[286765]: 167 167
Dec 06 10:03:42 np0005548790.localdomain systemd[1]: libpod-d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd.scope: Deactivated successfully.
Dec 06 10:03:42 np0005548790.localdomain podman[286749]: 2025-12-06 10:03:42.774488443 +0000 UTC m=+0.173540432 container died d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_mccarthy, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, distribution-scope=public, version=7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., architecture=x86_64)
Dec 06 10:03:42 np0005548790.localdomain podman[286770]: 2025-12-06 10:03:42.872243221 +0000 UTC m=+0.084430267 container remove d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_mccarthy, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218)
Dec 06 10:03:42 np0005548790.localdomain systemd[1]: libpod-conmon-d0ee0440d6eb2c45af9764321641692f4690977a74ab6163a60c46dbe5857dfd.scope: Deactivated successfully.
Dec 06 10:03:42 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:03:43 np0005548790.localdomain systemd-rc-local-generator[286809]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:03:43 np0005548790.localdomain systemd-sysv-generator[286813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6a23d430be0f6116e2c1c5fc6124e7131ac998c2f7e112581c906f67c5b61834-merged.mount: Deactivated successfully.
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:03:43 np0005548790.localdomain systemd-rc-local-generator[286851]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:03:43 np0005548790.localdomain systemd-sysv-generator[286858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:03:43 np0005548790.localdomain systemd[1]: Starting Ceph mgr.np0005548790.kvkfyr for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 10:03:44 np0005548790.localdomain podman[286916]: 
Dec 06 10:03:44 np0005548790.localdomain podman[286916]: 2025-12-06 10:03:44.051504176 +0000 UTC m=+0.080104229 container create b4d24188cf11d1a647b309e22fd7630b13da3feade35c1f9a1922fbc7b4c48ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, version=7, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7)
Dec 06 10:03:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbcb452a09497f6d55ef747ec5fcbed5f432bbd0e63f19f4b321436844a5d082/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbcb452a09497f6d55ef747ec5fcbed5f432bbd0e63f19f4b321436844a5d082/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbcb452a09497f6d55ef747ec5fcbed5f432bbd0e63f19f4b321436844a5d082/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbcb452a09497f6d55ef747ec5fcbed5f432bbd0e63f19f4b321436844a5d082/merged/var/lib/ceph/mgr/ceph-np0005548790.kvkfyr supports timestamps until 2038 (0x7fffffff)
Dec 06 10:03:44 np0005548790.localdomain podman[286916]: 2025-12-06 10:03:44.114576982 +0000 UTC m=+0.143177025 container init b4d24188cf11d1a647b309e22fd7630b13da3feade35c1f9a1922fbc7b4c48ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr, vcs-type=git, io.buildah.version=1.41.4, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, version=7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:03:44 np0005548790.localdomain podman[286916]: 2025-12-06 10:03:44.021070978 +0000 UTC m=+0.049671051 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:03:44 np0005548790.localdomain podman[286916]: 2025-12-06 10:03:44.121651744 +0000 UTC m=+0.150251787 container start b4d24188cf11d1a647b309e22fd7630b13da3feade35c1f9a1922fbc7b4c48ad (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Dec 06 10:03:44 np0005548790.localdomain bash[286916]: b4d24188cf11d1a647b309e22fd7630b13da3feade35c1f9a1922fbc7b4c48ad
Dec 06 10:03:44 np0005548790.localdomain systemd[1]: Started Ceph mgr.np0005548790.kvkfyr for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 10:03:44 np0005548790.localdomain sudo[286689]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: pidfile_write: ignore empty --pid-file
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'alerts'
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'balancer'
Dec 06 10:03:44 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:44.279+0000 7efd860f8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'cephadm'
Dec 06 10:03:44 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:44.345+0000 7efd860f8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 10:03:44 np0005548790.localdomain sudo[286959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:44 np0005548790.localdomain sudo[286959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:44 np0005548790.localdomain sudo[286959]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:44 np0005548790.localdomain sudo[286981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:44 np0005548790.localdomain sudo[286981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:44 np0005548790.localdomain sudo[286981]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:44 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'crash'
Dec 06 10:03:44 np0005548790.localdomain sudo[287000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:03:44 np0005548790.localdomain sudo[287000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'dashboard'
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:45.013+0000 7efd860f8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'devicehealth'
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:45.573+0000 7efd860f8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]:   from numpy import show_config as show_numpy_config
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:45.720+0000 7efd860f8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'influx'
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'insights'
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:45.782+0000 7efd860f8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain podman[287090]: 2025-12-06 10:03:45.828439837 +0000 UTC m=+0.086238616 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1763362218)
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'iostat'
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'k8sevents'
Dec 06 10:03:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:45.908+0000 7efd860f8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 10:03:45 np0005548790.localdomain podman[287090]: 2025-12-06 10:03:45.96127131 +0000 UTC m=+0.219070039 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7)
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'localpool'
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'mirroring'
Dec 06 10:03:46 np0005548790.localdomain sudo[287000]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'nfs'
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'orchestrator'
Dec 06 10:03:46 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:46.689+0000 7efd860f8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 10:03:46 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:46.846+0000 7efd860f8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'osd_support'
Dec 06 10:03:46 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:46.916+0000 7efd860f8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 10:03:46 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 10:03:46 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:46.973+0000 7efd860f8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain sudo[287190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:47 np0005548790.localdomain sudo[287190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:47 np0005548790.localdomain sudo[287190]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'progress'
Dec 06 10:03:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:47.041+0000 7efd860f8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'prometheus'
Dec 06 10:03:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:47.100+0000 7efd860f8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'rbd_support'
Dec 06 10:03:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:47.410+0000 7efd860f8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'restful'
Dec 06 10:03:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:47.491+0000 7efd860f8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain sudo[287208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:47 np0005548790.localdomain sudo[287208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:47 np0005548790.localdomain sudo[287208]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'rgw'
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 10:03:47 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'rook'
Dec 06 10:03:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:47.818+0000 7efd860f8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'selftest'
Dec 06 10:03:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:48.242+0000 7efd860f8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'snap_schedule'
Dec 06 10:03:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:48.303+0000 7efd860f8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'stats'
Dec 06 10:03:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:03:48.385 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:03:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:03:48.386 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:03:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:03:48.387 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:03:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:03:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:03:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:03:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152653 "" "Go-http-client/1.1"
Dec 06 10:03:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:03:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18204 "" "Go-http-client/1.1"
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'status'
Dec 06 10:03:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:03:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:48.541+0000 7efd860f8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'telegraf'
Dec 06 10:03:48 np0005548790.localdomain podman[287227]: 2025-12-06 10:03:48.599103666 +0000 UTC m=+0.097179374 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:03:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:48.607+0000 7efd860f8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'telemetry'
Dec 06 10:03:48 np0005548790.localdomain podman[287226]: 2025-12-06 10:03:48.647869282 +0000 UTC m=+0.145548989 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:03:48 np0005548790.localdomain podman[287226]: 2025-12-06 10:03:48.663213689 +0000 UTC m=+0.160893376 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:03:48 np0005548790.localdomain podman[287227]: 2025-12-06 10:03:48.666446068 +0000 UTC m=+0.164521766 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:03:48 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:03:48 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 10:03:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:48.747+0000 7efd860f8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain sudo[287266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:48 np0005548790.localdomain sudo[287266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:48 np0005548790.localdomain sudo[287266]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:48.904+0000 7efd860f8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 10:03:48 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'volumes'
Dec 06 10:03:49 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 10:03:49 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'zabbix'
Dec 06 10:03:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:49.096+0000 7efd860f8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 10:03:49 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 10:03:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:03:49.155+0000 7efd860f8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 10:03:49 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x5635485111e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 06 10:03:49 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3108124117
Dec 06 10:03:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:03:50 np0005548790.localdomain podman[287284]: 2025-12-06 10:03:50.599892575 +0000 UTC m=+0.117683812 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, vcs-type=git, config_id=edpm, release=1755695350, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 06 10:03:50 np0005548790.localdomain podman[287284]: 2025-12-06 10:03:50.612760614 +0000 UTC m=+0.130551851 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, vcs-type=git)
Dec 06 10:03:50 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:03:52 np0005548790.localdomain sudo[287305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:52 np0005548790.localdomain sudo[287305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:52 np0005548790.localdomain sudo[287305]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:03:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:03:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:03:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:03:54 np0005548790.localdomain systemd[1]: tmp-crun.QuKWS2.mount: Deactivated successfully.
Dec 06 10:03:54 np0005548790.localdomain podman[287323]: 2025-12-06 10:03:54.583221937 +0000 UTC m=+0.100611657 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 06 10:03:54 np0005548790.localdomain podman[287323]: 2025-12-06 10:03:54.599205611 +0000 UTC m=+0.116595301 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 10:03:54 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:03:56 np0005548790.localdomain sudo[287343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:03:56 np0005548790.localdomain sudo[287343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287343]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:03:56 np0005548790.localdomain sudo[287361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287361]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:03:56 np0005548790.localdomain sudo[287379]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287379]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:03:56 np0005548790.localdomain sudo[287397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287397]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:03:56 np0005548790.localdomain sudo[287415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:03:56 np0005548790.localdomain sudo[287415]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:03:56 np0005548790.localdomain sudo[287439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287439]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain podman[287433]: 2025-12-06 10:03:56.704831092 +0000 UTC m=+0.086532185 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:03:56 np0005548790.localdomain podman[287433]: 2025-12-06 10:03:56.713125498 +0000 UTC m=+0.094826631 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:03:56 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:03:56 np0005548790.localdomain sudo[287490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:03:56 np0005548790.localdomain sudo[287490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287490]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:03:56 np0005548790.localdomain sudo[287508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287508]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:56 np0005548790.localdomain sudo[287526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:03:56 np0005548790.localdomain sudo[287526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:56 np0005548790.localdomain sudo[287526]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:03:57 np0005548790.localdomain sudo[287544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287544]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:03:57 np0005548790.localdomain sudo[287562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287562]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:03:57 np0005548790.localdomain sudo[287580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287580]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:03:57 np0005548790.localdomain sudo[287598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287598]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:03:57 np0005548790.localdomain sudo[287616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287616]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:03:57 np0005548790.localdomain sudo[287650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287650]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:03:57 np0005548790.localdomain sudo[287668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287668]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:03:57 np0005548790.localdomain sudo[287686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287686]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:03:57 np0005548790.localdomain sudo[287704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287704]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:03:57 np0005548790.localdomain sudo[287722]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287722]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:03:57 np0005548790.localdomain sudo[287740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287740]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:57 np0005548790.localdomain sudo[287758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:03:57 np0005548790.localdomain sudo[287758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:57 np0005548790.localdomain sudo[287758]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:03:58 np0005548790.localdomain sudo[287776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287776]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:03:58 np0005548790.localdomain sudo[287810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287810]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:03:58 np0005548790.localdomain sudo[287828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287828]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:03:58 np0005548790.localdomain sudo[287846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287846]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:03:58 np0005548790.localdomain sudo[287864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287864]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:03:58 np0005548790.localdomain sudo[287882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287882]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:03:58 np0005548790.localdomain sudo[287900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287900]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:03:58 np0005548790.localdomain sudo[287918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287918]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:03:58 np0005548790.localdomain sudo[287936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287936]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:58 np0005548790.localdomain sudo[287970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:03:58 np0005548790.localdomain sudo[287970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:58 np0005548790.localdomain sudo[287970]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:59 np0005548790.localdomain sudo[287988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:03:59 np0005548790.localdomain sudo[287988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:59 np0005548790.localdomain sudo[287988]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:59 np0005548790.localdomain sudo[288006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:03:59 np0005548790.localdomain sudo[288006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:59 np0005548790.localdomain sudo[288006]: pam_unix(sudo:session): session closed for user root
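The repeated touch → chown → chmod → mv sequence above is the stage-then-rename pattern: cephadm prepares each file under a private /tmp staging tree with its final mode and ownership, then moves it into place so daemons never read a half-written ceph.conf or keyring. A minimal sketch of the same idea in Python (the `install_config` helper and all paths are illustrative, not cephadm's actual code; staging inside the target directory keeps `os.replace()` atomic, whereas the /tmp staging seen in the log can cross filesystems, where mv falls back to copy-and-rename):

```python
# Stage-then-rename: write the new file privately, set the final mode,
# then rename it into place so readers see either the old file or the
# complete new one, never a torn write. Illustrative sketch only.
import os
import tempfile

def install_config(target_path: str, data: bytes, mode: int = 0o644) -> None:
    target_dir = os.path.dirname(target_path)
    os.makedirs(target_dir, exist_ok=True)
    # Stage in the *same* directory so the final rename stays on one
    # filesystem; os.replace() is then an atomic rename(2).
    fd, staged = tempfile.mkstemp(dir=target_dir, suffix=".new")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.chmod(staged, mode)          # final mode while still private
        os.replace(staged, target_path)  # atomic within one filesystem
    except BaseException:
        os.unlink(staged)
        raise

if __name__ == "__main__":
    root = tempfile.mkdtemp()
    path = os.path.join(root, "etc", "ceph.conf")
    install_config(path, b"[global]\nfsid = demo\n")
    print(oct(os.stat(path).st_mode & 0o777))  # -> 0o644
```

The keyring files in the log get mode 600 instead of 644 for the same reason they are chowned to 0:0 before the move: the permissions are final before any reader can open the path.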
Dec 06 10:03:59 np0005548790.localdomain sudo[288024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:03:59 np0005548790.localdomain sudo[288024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:03:59 np0005548790.localdomain sudo[288024]: pam_unix(sudo:session): session closed for user root
Dec 06 10:03:59 np0005548790.localdomain sudo[288042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:03:59 np0005548790.localdomain sudo[288042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
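Each privileged step above leaves a sudo audit line (invoking user, PWD, run-as user, COMMAND) bracketed by pam_unix session open/close messages. A small parser for lines of that shape can recover the full command audit trail; the regex and the `audited_commands` helper below are illustrative, not part of sudo or journald:

```python
import re

# Matches sudo audit lines like the ones above:
#   "sudo[pid]: user : PWD=... ; USER=... ; COMMAND=..."
SUDO_RE = re.compile(
    r"sudo\[(?P<pid>\d+)\]: (?P<user>\S+) : "
    r"PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

def audited_commands(lines):
    """Yield (invoking_user, run_as_user, command) per sudo audit line.

    pam_unix session open/close lines do not match and are skipped.
    """
    for line in lines:
        m = SUDO_RE.search(line)
        if m:
            yield m.group("user"), m.group("runas"), m.group("cmd")

sample = [
    "Dec 06 10:03:57 host sudo[287704]: ceph-admin : PWD=/home/ceph-admin ; "
    "USER=root ; COMMAND=/bin/mkdir -p /etc/ceph",
    "Dec 06 10:03:57 host sudo[287704]: pam_unix(sudo:session): "
    "session opened for user root(uid=0) by (uid=1002)",
]
print(list(audited_commands(sample)))
# -> [('ceph-admin', 'root', '/bin/mkdir -p /etc/ceph')]
```

Fed the whole section, a parser like this shows every root action cephadm took on this host via the ceph-admin account (uid 1002).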
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 2025-12-06 10:03:59.884641899 +0000 UTC m=+0.088463756 container create 31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_ellis, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph)
Dec 06 10:03:59 np0005548790.localdomain systemd[1]: Started libpod-conmon-31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf.scope.
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 2025-12-06 10:03:59.846515643 +0000 UTC m=+0.050337520 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:03:59 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 2025-12-06 10:03:59.967892854 +0000 UTC m=+0.171714701 container init 31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_ellis, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, ceph=True, version=7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 2025-12-06 10:03:59.97991145 +0000 UTC m=+0.183733307 container start 31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_ellis, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7)
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 2025-12-06 10:03:59.980203038 +0000 UTC m=+0.184024885 container attach 31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_ellis, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, vcs-type=git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:03:59 np0005548790.localdomain stoic_ellis[288119]: 167 167
Dec 06 10:03:59 np0005548790.localdomain systemd[1]: libpod-31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf.scope: Deactivated successfully.
Dec 06 10:03:59 np0005548790.localdomain podman[288104]: 2025-12-06 10:03:59.984908137 +0000 UTC m=+0.188730014 container died 31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_ellis, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:04:00 np0005548790.localdomain podman[288124]: 2025-12-06 10:04:00.107673486 +0000 UTC m=+0.106825177 container remove 31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stoic_ellis, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git)
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: libpod-conmon-31405c2f318b8d9e6daa0e60f044d0bdc5dda7ff77929518cf461c0793a036bf.scope: Deactivated successfully.
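The stoic_ellis and eloquent_mccarthy containers show the full event sequence podman logs for a short-lived run: create, init, start, attach, died, remove, with matching libpod/conmon scope start and deactivation from systemd. A sketch that reconstructs that sequence per container id from journal lines of this shape (`EVENT_RE` and `lifecycles` are illustrative helpers, not a podman API):

```python
import re
from collections import defaultdict

# Podman journal events look like "container <event> <64-hex-id> (...)".
EVENT_RE = re.compile(r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})")

def lifecycles(lines):
    """Map each container id to the ordered list of its logged events."""
    events = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events[m.group("cid")].append(m.group("event"))
    return dict(events)

cid = "31" + "0" * 62  # illustrative 64-hex container id
sample = [
    f"podman[1]: 2025-12-06 10:03:59 container {e} {cid} (image=...)"
    for e in ["create", "init", "start", "attach", "died", "remove"]
]
print(lifecycles(sample)[cid])
# -> ['create', 'init', 'start', 'attach', 'died', 'remove']
```

A container whose history stops at "died" without a matching "remove" would indicate a leaked one-shot container; both containers in this section complete the full cycle in well under a second.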
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 2025-12-06 10:04:00.229422267 +0000 UTC m=+0.079473482 container create c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mccarthy, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100.scope.
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:04:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca760b70241539fa6183c8f00282af95a066a73dc8b3bf3bcf15a73d31f8cbd5/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca760b70241539fa6183c8f00282af95a066a73dc8b3bf3bcf15a73d31f8cbd5/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca760b70241539fa6183c8f00282af95a066a73dc8b3bf3bcf15a73d31f8cbd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca760b70241539fa6183c8f00282af95a066a73dc8b3bf3bcf15a73d31f8cbd5/merged/var/lib/ceph/mon/ceph-np0005548790 supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 2025-12-06 10:04:00.195015001 +0000 UTC m=+0.045066256 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 2025-12-06 10:04:00.30270181 +0000 UTC m=+0.152753025 container init c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mccarthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, release=1763362218)
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 2025-12-06 10:04:00.313048772 +0000 UTC m=+0.163099977 container start c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mccarthy, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, ceph=True, RELEASE=main, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 2025-12-06 10:04:00.31409507 +0000 UTC m=+0.164146315 container attach c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mccarthy, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, release=1763362218, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git)
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: libpod-c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100.scope: Deactivated successfully.
Dec 06 10:04:00 np0005548790.localdomain podman[288140]: 2025-12-06 10:04:00.408939169 +0000 UTC m=+0.258990384 container died c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mccarthy, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec 06 10:04:00 np0005548790.localdomain podman[288181]: 2025-12-06 10:04:00.495494554 +0000 UTC m=+0.077173520 container remove c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: libpod-conmon-c827f2d98de10d642313994e5ac6957bbf35840eadda71482c4556c7c01f0100.scope: Deactivated successfully.
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:04:00 np0005548790.localdomain systemd-rc-local-generator[288224]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:04:00 np0005548790.localdomain systemd-sysv-generator[288228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-063c82e231de5a7081bb9a232c3e38d89e64077459b17b9ca9d284d8780d128f-merged.mount: Deactivated successfully.
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:04:00 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:04:01 np0005548790.localdomain podman[288235]: 2025-12-06 10:04:01.016959517 +0000 UTC m=+0.093966416 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:04:01 np0005548790.localdomain systemd-sysv-generator[288289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:04:01 np0005548790.localdomain systemd-rc-local-generator[288286]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:04:01 np0005548790.localdomain podman[288235]: 2025-12-06 10:04:01.070536404 +0000 UTC m=+0.147543343 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: Starting Ceph mon.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 10:04:01 np0005548790.localdomain podman[288356]: 
Dec 06 10:04:01 np0005548790.localdomain podman[288356]: 2025-12-06 10:04:01.729321784 +0000 UTC m=+0.109302084 container create c2f79d602097dfaaa7c8301203250e6916286483632f1ca70c4f11fc9e548b5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 10:04:01 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e800146688dd6667c503ef525a8dc9b08fdbcbbe4822831d3e3bbdbda8ab1a17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:01 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e800146688dd6667c503ef525a8dc9b08fdbcbbe4822831d3e3bbdbda8ab1a17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:01 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e800146688dd6667c503ef525a8dc9b08fdbcbbe4822831d3e3bbdbda8ab1a17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:01 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e800146688dd6667c503ef525a8dc9b08fdbcbbe4822831d3e3bbdbda8ab1a17/merged/var/lib/ceph/mon/ceph-np0005548790 supports timestamps until 2038 (0x7fffffff)
Dec 06 10:04:01 np0005548790.localdomain podman[288356]: 2025-12-06 10:04:01.792334187 +0000 UTC m=+0.172314497 container init c2f79d602097dfaaa7c8301203250e6916286483632f1ca70c4f11fc9e548b5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, distribution-scope=public, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64)
Dec 06 10:04:01 np0005548790.localdomain podman[288356]: 2025-12-06 10:04:01.800351315 +0000 UTC m=+0.180331605 container start c2f79d602097dfaaa7c8301203250e6916286483632f1ca70c4f11fc9e548b5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, RELEASE=main, ceph=True, release=1763362218, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:04:01 np0005548790.localdomain bash[288356]: c2f79d602097dfaaa7c8301203250e6916286483632f1ca70c4f11fc9e548b5c
Dec 06 10:04:01 np0005548790.localdomain podman[288356]: 2025-12-06 10:04:01.705508526 +0000 UTC m=+0.085488816 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:04:01 np0005548790.localdomain systemd[1]: Started Ceph mon.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: pidfile_write: ignore empty --pid-file
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: load: jerasure load: lrc 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: RocksDB version: 7.9.2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Git sha 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: DB SUMMARY
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: DB Session ID:  9EENGG53AOVF6BJXYAD5
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: CURRENT file:  CURRENT
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005548790/store.db dir, Total Num: 0, files: 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005548790/store.db: 000004.log size: 761 ; 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                         Options.error_if_exists: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.create_if_missing: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                                     Options.env: 0x5594fe59c9e0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                                Options.info_log: 0x559500d7ed20
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                              Options.statistics: (nil)
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                               Options.use_fsync: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                              Options.db_log_dir: 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                                 Options.wal_dir: 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                    Options.write_buffer_manager: 0x559500d8f540
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.unordered_write: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                               Options.row_cache: None
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                              Options.wal_filter: None
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.two_write_queues: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.wal_compression: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.atomic_flush: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.max_background_jobs: 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.max_background_compactions: -1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.max_subcompactions: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                          Options.max_open_files: -1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Compression algorithms supported:
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kZSTD supported: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kXpressCompression supported: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kBZip2Compression supported: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kLZ4Compression supported: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kZlibCompression supported: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         kSnappyCompression supported: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005548790/store.db/MANIFEST-000005
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:           Options.merge_operator: 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:        Options.compaction_filter: None
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559500d7e980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x559500d7b350
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.compression: NoCompression
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.num_levels: 7
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.table_properties_collectors: 
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                           Options.bloom_locality: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                               Options.ttl: 2592000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                       Options.enable_blob_files: false
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                           Options.min_blob_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 10:04:01 np0005548790.localdomain sudo[288042]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005548790/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015441853009, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015441855239, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015441855347, "job": 1, "event": "recovery_finished"}
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559500da2e00
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: DB pointer 0x559500e98000
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 does not exist in monmap, will attempt to join an existing cluster
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                            Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x559500d7b350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.1e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: starting mon.np0005548790 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005548790 fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(???) e0 preinit fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing) e3 sync_obtain_latest_monmap
Dec 06 10:04:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).mds e16 new map
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).mds e16 print_map
                                                           e16
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        16
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2025-12-06T08:18:49.925523+0000
                                                           modified        2025-12-06T10:03:02.051468+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        87
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26356}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26356 members: 26356
                                                           [mds.mds.np0005548790.vhcezv{0:26356} state up:active seq 16 addr [v2:172.18.0.108:6808/1621657194,v1:172.18.0.108:6809/1621657194] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005548789.vxwwsq{-1:16884} state up:standby seq 1 addr [v2:172.18.0.107:6808/3033303281,v1:172.18.0.107:6809/3033303281] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005548788.erzujf{-1:16890} state up:standby seq 1 addr [v2:172.18.0.106:6808/309324236,v1:172.18.0.106:6809/309324236] compat {c=[1],r=[1],i=[17ff]}]
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).osd e87 crush map has features 3314933000854323200, adjusting msgr requires
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).osd e87 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).osd e87 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).osd e87 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3859: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17040 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mgr", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17061 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548785.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label mon to host np0005548785.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3860: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17067 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548785.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label _admin to host np0005548785.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3861: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17079 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548786.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label mon to host np0005548786.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17085 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548786.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label _admin to host np0005548786.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548788.yvwbqq started
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3862: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mgrmap e11: np0005548785.vhqlsq(active, since 2h), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548786.mczynb
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17097 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548787.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label mon to host np0005548787.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548789.mzhmje started
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3863: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17103 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548787.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label _admin to host np0005548787.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mgrmap e12: np0005548785.vhqlsq(active, since 2h), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548786.mczynb, np0005548789.mzhmje
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17109 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548788.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label mon to host np0005548788.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548790.kvkfyr started
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3864: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mgrmap e13: np0005548785.vhqlsq(active, since 2h), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17115 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548788.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label _admin to host np0005548788.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3865: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17121 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548789.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label mon to host np0005548789.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17127 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548789.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label _admin to host np0005548789.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3866: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17133 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548790.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label mon to host np0005548790.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3867: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17139 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005548790.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Added label _admin to host np0005548790.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17145 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Saving service mon spec with placement label:mon
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3868: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='client.17151 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548788", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3869: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: pgmap v3870: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Dec 06 10:04:02 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x5635485111e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 06 10:04:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@-1(probing) e4  my rank is now 3 (was -1)
Dec 06 10:04:04 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:04:04 np0005548790.localdomain ceph-mon[288373]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 06 10:04:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 06 10:04:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 06 10:04:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mgrc update_daemon_metadata mon.np0005548790 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005548790.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005548790.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548785"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548785 calling monitor election
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: pgmap v3871: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: pgmap v3872: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548785 is new leader, mons np0005548785,np0005548787,np0005548786,np0005548790 in quorum (ranks 0,1,2,3)
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: monmap epoch 4
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:04:02.181213+0000
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548785
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: osdmap e87: 6 total, 6 up, 6 in
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: mgrmap e13: np0005548785.vhqlsq(active, since 2h), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: pgmap v3873: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 06 10:04:08 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548510f20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: paxos.3).electionLogic(18) init, last seen epoch 18
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 06 10:04:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 06 10:04:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 06 10:04:12 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 06 10:04:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:04:12 np0005548790.localdomain podman[288412]: 2025-12-06 10:04:12.573521806 +0000 UTC m=+0.081509978 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:04:12 np0005548790.localdomain podman[288412]: 2025-12-06 10:04:12.609287778 +0000 UTC m=+0.117276000 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true)
Dec 06 10:04:12 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:04:13 np0005548790.localdomain ceph-mds[285635]: mds.beacon.mds.np0005548790.vhcezv missed beacon ack from the monitors
Dec 06 10:04:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: pgmap v3874: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548785"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548785 calling monitor election
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: pgmap v3875: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: pgmap v3876: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548785 is new leader, mons np0005548785,np0005548787,np0005548786,np0005548790,np0005548789 in quorum (ranks 0,1,2,3,4)
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: monmap epoch 5
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:04:08.937568+0000
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548785
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: osdmap e87: 6 total, 6 up, 6 in
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mgrmap e13: np0005548785.vhqlsq(active, since 2h), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 06 10:04:14 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: paxos.3).electionLogic(22) init, last seen epoch 22
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:14 np0005548790.localdomain sudo[288432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:04:14 np0005548790.localdomain sudo[288432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:14 np0005548790.localdomain sudo[288432]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:14 np0005548790.localdomain sudo[288450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:14 np0005548790.localdomain sudo[288450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:14 np0005548790.localdomain sudo[288450]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:14 np0005548790.localdomain sudo[288468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:04:14 np0005548790.localdomain sudo[288468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:15 np0005548790.localdomain podman[288559]: 2025-12-06 10:04:15.663386656 +0000 UTC m=+0.110548608 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Dec 06 10:04:15 np0005548790.localdomain podman[288559]: 2025-12-06 10:04:15.774395385 +0000 UTC m=+0.221557357 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 06 10:04:16 np0005548790.localdomain sudo[288468]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:04:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:04:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:04:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:04:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:04:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18687 "" "Go-http-client/1.1"
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548785"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548785 calling monitor election
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='client.17165 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548788", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: pgmap v3877: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: pgmap v3878: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: pgmap v3879: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548785 is new leader, mons np0005548785,np0005548787,np0005548786,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3,4,5)
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: monmap epoch 6
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:04:14.235362+0000
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548785
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: 5: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005548788
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: osdmap e87: 6 total, 6 up, 6 in
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: mgrmap e13: np0005548785.vhqlsq(active, since 2h), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:04:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:04:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:04:19 np0005548790.localdomain sudo[288678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:19 np0005548790.localdomain sudo[288678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:19 np0005548790.localdomain sudo[288678]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:19 np0005548790.localdomain systemd[1]: tmp-crun.seEPEU.mount: Deactivated successfully.
Dec 06 10:04:19 np0005548790.localdomain sudo[288717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:04:19 np0005548790.localdomain sudo[288717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:19 np0005548790.localdomain podman[288695]: 2025-12-06 10:04:19.594701084 +0000 UTC m=+0.098104310 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:04:19 np0005548790.localdomain podman[288695]: 2025-12-06 10:04:19.603849972 +0000 UTC m=+0.107253168 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 06 10:04:19 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:04:19 np0005548790.localdomain podman[288694]: 2025-12-06 10:04:19.695010222 +0000 UTC m=+0.201709647 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:04:19 np0005548790.localdomain podman[288694]: 2025-12-06 10:04:19.708209301 +0000 UTC m=+0.214908756 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:04:19 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:04:20 np0005548790.localdomain sudo[288717]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:20 np0005548790.localdomain sudo[288788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:04:20 np0005548790.localdomain sudo[288788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:20 np0005548790.localdomain sudo[288788]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:20 np0005548790.localdomain sudo[288806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:04:20 np0005548790.localdomain sudo[288806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:20 np0005548790.localdomain sudo[288806]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:20 np0005548790.localdomain sudo[288824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:20 np0005548790.localdomain sudo[288824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:20 np0005548790.localdomain sudo[288824]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:04:20 np0005548790.localdomain sudo[288843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:20 np0005548790.localdomain podman[288842]: 2025-12-06 10:04:20.751883668 +0000 UTC m=+0.091185382 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:04:20 np0005548790.localdomain sudo[288843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:20 np0005548790.localdomain sudo[288843]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:20 np0005548790.localdomain podman[288842]: 2025-12-06 10:04:20.768566462 +0000 UTC m=+0.107868176 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vcs-type=git, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Dec 06 10:04:20 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:04:20 np0005548790.localdomain sudo[288878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:20 np0005548790.localdomain sudo[288878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:20 np0005548790.localdomain sudo[288878]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:20 np0005548790.localdomain sudo[288915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:20 np0005548790.localdomain sudo[288915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:20 np0005548790.localdomain sudo[288915]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[288933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:21 np0005548790.localdomain sudo[288933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[288933]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[288951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:04:21 np0005548790.localdomain sudo[288951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[288951]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[288969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:04:21 np0005548790.localdomain sudo[288969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[288969]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[288987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:04:21 np0005548790.localdomain sudo[288987]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[288987]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[289005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:21 np0005548790.localdomain sudo[289005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[289005]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[289023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:21 np0005548790.localdomain sudo[289023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[289023]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[289041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:21 np0005548790.localdomain sudo[289041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[289041]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[289075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:21 np0005548790.localdomain sudo[289075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[289075]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[289093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:21 np0005548790.localdomain sudo[289093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[289093]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:21 np0005548790.localdomain sudo[289111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:21 np0005548790.localdomain sudo[289111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:21 np0005548790.localdomain sudo[289111]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548785.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: pgmap v3880: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='client.34103 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548788", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:22 np0005548790.localdomain sudo[289129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:04:22 np0005548790.localdomain sudo[289129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:22 np0005548790.localdomain sudo[289129]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:23 np0005548790.localdomain ceph-mon[288373]: from='client.17178 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548789", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:04:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548785 (monmap changed)...
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548785 on np0005548785.localdomain
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: pgmap v3881: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548785.vhqlsq (monmap changed)...
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548785.vhqlsq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548785.vhqlsq on np0005548785.localdomain
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/2303863447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:24.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:24.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:04:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:24.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:04:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:24.620 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='client.17190 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548790", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548785 (monmap changed)...
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548785.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548785 on np0005548785.localdomain
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.103:0/3114499803' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:25.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:25.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:04:25 np0005548790.localdomain podman[289147]: 2025-12-06 10:04:25.558967715 +0000 UTC m=+0.076830630 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 10:04:25 np0005548790.localdomain podman[289147]: 2025-12-06 10:04:25.571224088 +0000 UTC m=+0.089087013 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:04:25 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: pgmap v3882: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/4040142409' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:26.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548786 (monmap changed)...
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.103:0/1185796831' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.360 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.360 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.361 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.549 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.550 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.550 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.551 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:04:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:27.551 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:04:27 np0005548790.localdomain podman[289167]: 2025-12-06 10:04:27.562494758 +0000 UTC m=+0.078566487 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:04:27 np0005548790.localdomain podman[289167]: 2025-12-06 10:04:27.602234039 +0000 UTC m=+0.118305748 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:04:27 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.023 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.212 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.213 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12025MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.214 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.214 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:04:28 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3108124117
Dec 06 10:04:28 np0005548790.localdomain sshd[26171]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[26116]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[26190]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[26078]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[25982]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-14.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain sshd[26059]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-22.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-26.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-26.scope: Consumed 3min 24.386s CPU time.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-20.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-25.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-19.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 25 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).osd e87 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).osd e87 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).osd e88 e88: 6 total, 6 up, 6 in
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 14 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 26 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 20 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 22 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 19 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain sshd[25999]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[26097]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[26154]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain sshd[26021]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-21.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain sshd[26135]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-16.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-24.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-17.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 21 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-23.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 24 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 16 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 23 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548785"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548785"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548786"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548787"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548788"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548789"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548790"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 17 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain sshd[26040]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 14.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 22.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 26.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: session-18.scope: Deactivated successfully.
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: pgmap v3883: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548786.mczynb (monmap changed)...
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548786.mczynb on np0005548786.localdomain
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/2694903603' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.14120 172.18.0.103:0/3317297540' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: Activating manager daemon np0005548788.yvwbqq
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.103:0/899954398' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: osdmap e88: 6 total, 6 up, 6 in
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 20.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Session 18 logged out. Waiting for processes to exit.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 25.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 19.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 21.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 16.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 24.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 17.
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 23.
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).mds e16 all = 0
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).mds e16 all = 0
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).mds e16 all = 0
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: Removed session 18.
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548786.mczynb", "id": "np0005548786.mczynb"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548786.mczynb", "id": "np0005548786.mczynb"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).mds e16 all = 1
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.565 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.567 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:28.586 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} v 0)
Dec 06 10:04:28 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch
Dec 06 10:04:28 np0005548790.localdomain sshd[289233]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:04:28 np0005548790.localdomain sshd[289233]: Accepted publickey for ceph-admin from 192.168.122.106 port 41756 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:04:28 np0005548790.localdomain systemd-logind[760]: New session 64 of user ceph-admin.
Dec 06 10:04:28 np0005548790.localdomain systemd[1]: Started Session 64 of User ceph-admin.
Dec 06 10:04:28 np0005548790.localdomain sshd[289233]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:04:28 np0005548790.localdomain sudo[289237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:28 np0005548790.localdomain sudo[289237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:29 np0005548790.localdomain sudo[289237]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1147721137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:29.057 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:04:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:29.065 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:04:29 np0005548790.localdomain sudo[289255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:04:29 np0005548790.localdomain sudo[289255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:29.083 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:04:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:29.085 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:04:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:29.086 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: mgrmap e14: np0005548788.yvwbqq(active, starting, since 0.0774079s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548785"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548786.mczynb", "id": "np0005548786.mczynb"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: Manager daemon np0005548788.yvwbqq is now available
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/1147721137' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:29 np0005548790.localdomain podman[289347]: 2025-12-06 10:04:29.842112762 +0000 UTC m=+0.069213134 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 06 10:04:29 np0005548790.localdomain podman[289347]: 2025-12-06 10:04:29.953254445 +0000 UTC m=+0.180354877 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, RELEASE=main)
Dec 06 10:04:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:30.059 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:30.060 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:30 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548786.localdomain.devices.0}] v 0)
Dec 06 10:04:30 np0005548790.localdomain sudo[289255]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:30 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:04:30 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:04:30 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:04:30 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:30 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548786.localdomain}] v 0)
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mgrmap e15: np0005548788.yvwbqq(active, since 1.22065s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:04:30] ENGINE Bus STARTING
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:04:30] ENGINE Serving on http://172.18.0.106:8765
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:04:30] ENGINE Serving on https://172.18.0.106:7150
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:04:30] ENGINE Bus STARTED
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:04:30] ENGINE Client ('172.18.0.106', 43646) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/3456902707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548785.localdomain.devices.0}] v 0)
Dec 06 10:04:31 np0005548790.localdomain sudo[289468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:31 np0005548790.localdomain sudo[289468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:31 np0005548790.localdomain sudo[289468]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548785.localdomain}] v 0)
Dec 06 10:04:31 np0005548790.localdomain sudo[289486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:04:31 np0005548790.localdomain sudo[289486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:04:31.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:04:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:04:31 np0005548790.localdomain podman[289504]: 2025-12-06 10:04:31.568686842 +0000 UTC m=+0.081686482 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 10:04:31 np0005548790.localdomain podman[289504]: 2025-12-06 10:04:31.653112799 +0000 UTC m=+0.166112449 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:04:31 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:04:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).osd e88 _set_new_cache_sizes cache_size:1019704996 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:04:31 np0005548790.localdomain sudo[289486]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mgrmap e16: np0005548788.yvwbqq(active, since 2s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/497756778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain sudo[289559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:32 np0005548790.localdomain sudo[289559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:32 np0005548790.localdomain sudo[289559]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:32 np0005548790.localdomain sudo[289577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:04:32 np0005548790.localdomain sudo[289577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:32 np0005548790.localdomain sudo[289577]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548786.localdomain.devices.0}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548785.localdomain.devices.0}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548786.localdomain}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548785.localdomain}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005548785", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548785", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:04:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain sudo[289614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:04:33 np0005548790.localdomain sudo[289614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289614]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548785", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548785", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:04:33 np0005548790.localdomain sudo[289632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:04:33 np0005548790.localdomain sudo[289632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289632]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:33 np0005548790.localdomain sudo[289650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289650]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:33 np0005548790.localdomain sudo[289668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289668]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:33 np0005548790.localdomain sudo[289686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289686]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:33 np0005548790.localdomain sudo[289720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289720]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:04:33 np0005548790.localdomain sudo[289738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289738]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:04:33 np0005548790.localdomain sudo[289756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289756]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:04:33 np0005548790.localdomain sudo[289774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289774]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:04:33 np0005548790.localdomain sudo[289792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289792]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:33 np0005548790.localdomain sudo[289810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289810]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:33 np0005548790.localdomain sudo[289828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289828]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:33 np0005548790.localdomain sudo[289846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:33 np0005548790.localdomain sudo[289846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:33 np0005548790.localdomain sudo[289846]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[289880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:34 np0005548790.localdomain sudo[289880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289880]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Updating np0005548785.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: mgrmap e17: np0005548788.yvwbqq(active, since 4s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548785.vhqlsq started
Dec 06 10:04:34 np0005548790.localdomain sudo[289898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:04:34 np0005548790.localdomain sudo[289898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289898]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} v 0)
Dec 06 10:04:34 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:04:34 np0005548790.localdomain sudo[289916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:34 np0005548790.localdomain sudo[289916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289916]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[289934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:04:34 np0005548790.localdomain sudo[289934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289934]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[289952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:04:34 np0005548790.localdomain sudo[289952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289952]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[289970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:04:34 np0005548790.localdomain sudo[289970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289970]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[289988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:34 np0005548790.localdomain sudo[289988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[289988]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[290006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:04:34 np0005548790.localdomain sudo[290006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[290006]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[290040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:04:34 np0005548790.localdomain sudo[290040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[290040]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[290058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:04:34 np0005548790.localdomain sudo[290058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[290058]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[290076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:34 np0005548790.localdomain sudo[290076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[290076]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[290094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:04:34 np0005548790.localdomain sudo[290094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[290094]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:34 np0005548790.localdomain sudo[290112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:04:34 np0005548790.localdomain sudo[290112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:34 np0005548790.localdomain sudo[290112]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain sudo[290130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:04:35 np0005548790.localdomain sudo[290130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:35 np0005548790.localdomain sudo[290130]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain sudo[290148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:35 np0005548790.localdomain sudo[290148]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:35 np0005548790.localdomain sudo[290148]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain sudo[290166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mgrmap e18: np0005548788.yvwbqq(active, since 5s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548785.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain sudo[290166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:35 np0005548790.localdomain sudo[290166]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain sudo[290200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:04:35 np0005548790.localdomain sudo[290200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:35 np0005548790.localdomain sudo[290200]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548786.localdomain.devices.0}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548786.localdomain}] v 0)
Dec 06 10:04:35 np0005548790.localdomain sudo[290218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:04:35 np0005548790.localdomain sudo[290218]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:35 np0005548790.localdomain sudo[290218]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain sudo[290236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:35 np0005548790.localdomain sudo[290236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:35 np0005548790.localdomain sudo[290236]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548785.localdomain.devices.0}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548785.localdomain}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:04:35 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: Updating np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:04:36 np0005548790.localdomain sudo[290255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:04:36 np0005548790.localdomain sudo[290255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:36 np0005548790.localdomain sudo[290255]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).osd e88 _set_new_cache_sizes cache_size:1020047524 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:38 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 13 op/s
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/1383583435' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/1383583435' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:04:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: from='client.17340 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054583 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:43 np0005548790.localdomain systemd[1]: tmp-crun.PsBkrY.mount: Deactivated successfully.
Dec 06 10:04:43 np0005548790.localdomain podman[290273]: 2025-12-06 10:04:43.5886847 +0000 UTC m=+0.096222827 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:04:43 np0005548790.localdomain podman[290273]: 2025-12-06 10:04:43.594023176 +0000 UTC m=+0.101561273 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 10:04:43 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "mon rm", "name": "np0005548785"} v 0)
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon rm", "name": "np0005548785"} : dispatch
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e6 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:44 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@3(peon) e7  my rank is now 2 (was 3)
Dec 06 10:04:44 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Dec 06 10:04:44 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Dec 06 10:04:44 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511080 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: paxos.2).electionLogic(26) init, last seen epoch 26
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:04:48.386 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:04:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:04:48.387 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:04:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:04:48.387 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:04:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:04:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:04:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:04:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:04:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:04:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18684 "" "Go-http-client/1.1"
Dec 06 10:04:49 np0005548790.localdomain ceph-mds[285635]: mds.beacon.mds.np0005548790.vhcezv missed beacon ack from the monitors
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='client.34161 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548785"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Remove daemons mon.np0005548785
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Safe to remove mon.np0005548785: new quorum should be ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789', 'np0005548788'] (from ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789', 'np0005548788'])
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Removing monitor np0005548785 from monmap...
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon rm", "name": "np0005548785"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Removing daemon mon.np0005548785 from np0005548785.localdomain -- ports []
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789 in quorum (ranks 0,1,2,3)
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: monmap epoch 7
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:04:44.209099+0000
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: 4: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005548788
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: osdmap e88: 6 total, 6 up, 6 in
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: mgrmap e18: np0005548788.yvwbqq(active, since 20s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789 (MON_DOWN)
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789
Dec 06 10:04:49 np0005548790.localdomain ceph-mon[288373]:     mon.np0005548788 (rank 4) addr [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] is down (out of quorum)
Dec 06 10:04:50 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:04:50 np0005548790.localdomain ceph-mon[288373]: paxos.2).electionLogic(29) init, last seen epoch 29, mid-election, bumping
Dec 06 10:04:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:04:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:04:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:04:50 np0005548790.localdomain podman[290293]: 2025-12-06 10:04:50.569284025 +0000 UTC m=+0.080635404 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:04:50 np0005548790.localdomain podman[290293]: 2025-12-06 10:04:50.609213991 +0000 UTC m=+0.120565370 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:04:50 np0005548790.localdomain podman[290292]: 2025-12-06 10:04:50.620549229 +0000 UTC m=+0.134685964 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:04:50 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:04:50 np0005548790.localdomain podman[290292]: 2025-12-06 10:04:50.63415103 +0000 UTC m=+0.148287795 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:04:50 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: from='client.34193 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548785.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: Removed label mon from host np0005548785.localdomain
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3,4)
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: monmap epoch 7
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:04:44.209099+0000
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: 4: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005548788
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: osdmap e88: 6 total, 6 up, 6 in
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mgrmap e18: np0005548788.yvwbqq(active, since 22s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005548787,np0005548786,np0005548790,np0005548789)
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: Cluster is now healthy
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:04:51 np0005548790.localdomain podman[290332]: 2025-12-06 10:04:51.554447001 +0000 UTC m=+0.074908278 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 10:04:51 np0005548790.localdomain podman[290332]: 2025-12-06 10:04:51.563857877 +0000 UTC m=+0.084319104 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:04:51 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:04:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054728 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: from='client.26575 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548785.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: Removed label mgr from host np0005548785.localdomain
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: from='client.34189 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548785.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: Removed label _admin from host np0005548785.localdomain
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:04:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:04:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:04:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:04:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:56 np0005548790.localdomain sudo[290352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:56 np0005548790.localdomain sudo[290352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:04:56 np0005548790.localdomain sudo[290352]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:56 np0005548790.localdomain sudo[290371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:56 np0005548790.localdomain sudo[290371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:56 np0005548790.localdomain podman[290370]: 2025-12-06 10:04:56.307967722 +0000 UTC m=+0.085788713 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd)
Dec 06 10:04:56 np0005548790.localdomain podman[290370]: 2025-12-06 10:04:56.317955965 +0000 UTC m=+0.095776886 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:04:56 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 2025-12-06 10:04:56.740247491 +0000 UTC m=+0.075548296 container create de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_driscoll, GIT_BRANCH=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Dec 06 10:04:56 np0005548790.localdomain systemd[1]: Started libpod-conmon-de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2.scope.
Dec 06 10:04:56 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 2025-12-06 10:04:56.708842466 +0000 UTC m=+0.044143321 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 2025-12-06 10:04:56.813510273 +0000 UTC m=+0.148811078 container init de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_driscoll, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4)
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 2025-12-06 10:04:56.822681902 +0000 UTC m=+0.157982707 container start de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_driscoll, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4)
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 2025-12-06 10:04:56.822997871 +0000 UTC m=+0.158298676 container attach de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_driscoll, distribution-scope=public, RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc.)
Dec 06 10:04:56 np0005548790.localdomain vibrant_driscoll[290437]: 167 167
Dec 06 10:04:56 np0005548790.localdomain systemd[1]: libpod-de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2.scope: Deactivated successfully.
Dec 06 10:04:56 np0005548790.localdomain podman[290422]: 2025-12-06 10:04:56.830637019 +0000 UTC m=+0.165937854 container died de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_driscoll, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph)
Dec 06 10:04:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:04:56 np0005548790.localdomain podman[290442]: 2025-12-06 10:04:56.93065183 +0000 UTC m=+0.089897986 container remove de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_driscoll, RELEASE=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, name=rhceph)
Dec 06 10:04:56 np0005548790.localdomain systemd[1]: libpod-conmon-de55f101f63a3bc32580d7f43e345ccbad25f8d01697c669a35d048dfb7993b2.scope: Deactivated successfully.
Dec 06 10:04:57 np0005548790.localdomain sudo[290371]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:57 np0005548790.localdomain sudo[290458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:57 np0005548790.localdomain sudo[290458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:57 np0005548790.localdomain sudo[290458]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:57 np0005548790.localdomain sudo[290476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:57 np0005548790.localdomain sudo[290476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bd77cd268018d5e069c1145e4d2dc9c1d61274f1b47d49712580dff77a574b40-merged.mount: Deactivated successfully.
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.497723) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497497863, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11489, "num_deletes": 523, "total_data_size": 16425288, "memory_usage": 17269216, "flush_reason": "Manual Compaction"}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497572444, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11186310, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11494, "table_properties": {"data_size": 11133206, "index_size": 27526, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24517, "raw_key_size": 257425, "raw_average_key_size": 26, "raw_value_size": 10968634, "raw_average_value_size": 1120, "num_data_blocks": 1034, "num_entries": 9791, "num_filter_entries": 9791, "num_deletions": 522, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 1765015441, "file_creation_time": 1765015497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 74757 microseconds, and 26574 cpu microseconds.
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.572495) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11186310 bytes OK
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.572513) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.574293) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.574319) EVENT_LOG_v1 {"time_micros": 1765015497574313, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.574337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 16349370, prev total WAL file size 16349370, number of live WAL files 2.
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.577010) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(10MB) 8(1887B)]
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497577168, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11188197, "oldest_snapshot_seqno": -1}
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9272 keys, 11178391 bytes, temperature: kUnknown
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497649900, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11178391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11126592, "index_size": 27506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 248849, "raw_average_key_size": 26, "raw_value_size": 10968692, "raw_average_value_size": 1182, "num_data_blocks": 1033, "num_entries": 9272, "num_filter_entries": 9272, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015497, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.650193) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11178391 bytes
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.651806) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.7 rd, 153.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(10.7, 0.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9796, records dropped: 524 output_compression: NoCompression
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.651837) EVENT_LOG_v1 {"time_micros": 1765015497651823, "job": 4, "event": "compaction_finished", "compaction_time_micros": 72810, "compaction_time_cpu_micros": 35172, "output_level": 6, "num_output_files": 1, "total_output_size": 11178391, "num_input_records": 9796, "num_output_records": 9272, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497653433, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015497653481, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 06 10:04:57 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:04:57.576898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 2025-12-06 10:04:57.656904683 +0000 UTC m=+0.066001257 container create 77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_colden, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: Started libpod-conmon-77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249.scope.
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 2025-12-06 10:04:57.633492786 +0000 UTC m=+0.042589350 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:04:57 np0005548790.localdomain podman[290526]: 2025-12-06 10:04:57.76636627 +0000 UTC m=+0.074452757 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:04:57 np0005548790.localdomain podman[290526]: 2025-12-06 10:04:57.778427528 +0000 UTC m=+0.086514015 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 2025-12-06 10:04:57.796652324 +0000 UTC m=+0.205748918 container init 77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_colden, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_BRANCH=main, RELEASE=main, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, ceph=True)
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 2025-12-06 10:04:57.807646383 +0000 UTC m=+0.216742977 container start 77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_colden, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, version=7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 2025-12-06 10:04:57.80791601 +0000 UTC m=+0.217012614 container attach 77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_colden, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:04:57 np0005548790.localdomain clever_colden[290532]: 167 167
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: libpod-77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249.scope: Deactivated successfully.
Dec 06 10:04:57 np0005548790.localdomain podman[290511]: 2025-12-06 10:04:57.811404435 +0000 UTC m=+0.220501039 container died 77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_colden, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, name=rhceph, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:04:57 np0005548790.localdomain podman[290553]: 2025-12-06 10:04:57.911708153 +0000 UTC m=+0.090874223 container remove 77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_colden, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, RELEASE=main)
Dec 06 10:04:57 np0005548790.localdomain systemd[1]: libpod-conmon-77823ba5a52f076317bf557e69b19e29b3c7725f1af3861edbf5f4d566191249.scope: Deactivated successfully.
Dec 06 10:04:58 np0005548790.localdomain sudo[290476]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:58 np0005548790.localdomain sudo[290577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:58 np0005548790.localdomain sudo[290577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:58 np0005548790.localdomain sudo[290577]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:58 np0005548790.localdomain sudo[290595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:58 np0005548790.localdomain sudo[290595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-58bd0b14efe11090ecab4702a08ff8bb432a66506fae2a0b598b98ac9b6d2f9c-merged.mount: Deactivated successfully.
Dec 06 10:04:58 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:04:58 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:04:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:04:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 2025-12-06 10:04:58.724590852 +0000 UTC m=+0.074960160 container create 5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_chandrasekhar, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph)
Dec 06 10:04:58 np0005548790.localdomain systemd[1]: Started libpod-conmon-5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2.scope.
Dec 06 10:04:58 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 2025-12-06 10:04:58.696479618 +0000 UTC m=+0.046848926 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 2025-12-06 10:04:58.798837232 +0000 UTC m=+0.149206540 container init 5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_chandrasekhar, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 2025-12-06 10:04:58.805814141 +0000 UTC m=+0.156183439 container start 5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_chandrasekhar, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, version=7, name=rhceph)
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 2025-12-06 10:04:58.806035877 +0000 UTC m=+0.156405245 container attach 5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_chandrasekhar, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, version=7)
Dec 06 10:04:58 np0005548790.localdomain hardcore_chandrasekhar[290646]: 167 167
Dec 06 10:04:58 np0005548790.localdomain systemd[1]: libpod-5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2.scope: Deactivated successfully.
Dec 06 10:04:58 np0005548790.localdomain podman[290630]: 2025-12-06 10:04:58.811028403 +0000 UTC m=+0.161397701 container died 5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_chandrasekhar, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:04:58 np0005548790.localdomain podman[290651]: 2025-12-06 10:04:58.907837626 +0000 UTC m=+0.086959216 container remove 5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_chandrasekhar, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, com.redhat.component=rhceph-container, version=7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4, GIT_CLEAN=True)
Dec 06 10:04:58 np0005548790.localdomain systemd[1]: libpod-conmon-5dcb429499fdb29b059accea72e5dd8db1e246f6aae63867ca04e5dc13486ef2.scope: Deactivated successfully.
Dec 06 10:04:59 np0005548790.localdomain sudo[290595]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:59 np0005548790.localdomain sudo[290674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:04:59 np0005548790.localdomain sudo[290674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:59 np0005548790.localdomain sudo[290674]: pam_unix(sudo:session): session closed for user root
Dec 06 10:04:59 np0005548790.localdomain sudo[290692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:04:59 np0005548790.localdomain sudo[290692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:04:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1b31c2ba768c40b0238905e6badf94f095c7ad442072bde4302f3e65e7566c5d-merged.mount: Deactivated successfully.
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:04:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 2025-12-06 10:04:59.733278198 +0000 UTC m=+0.076625565 container create ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_nightingale, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vcs-type=git, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 10:04:59 np0005548790.localdomain systemd[1]: Started libpod-conmon-ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a.scope.
Dec 06 10:04:59 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 2025-12-06 10:04:59.793694571 +0000 UTC m=+0.137041968 container init ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_nightingale, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 2025-12-06 10:04:59.70136383 +0000 UTC m=+0.044711257 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 2025-12-06 10:04:59.802256334 +0000 UTC m=+0.145603701 container start ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_nightingale, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 2025-12-06 10:04:59.802537922 +0000 UTC m=+0.145885329 container attach ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_nightingale, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, release=1763362218, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, name=rhceph, architecture=x86_64, version=7)
Dec 06 10:04:59 np0005548790.localdomain wizardly_nightingale[290741]: 167 167
Dec 06 10:04:59 np0005548790.localdomain systemd[1]: libpod-ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a.scope: Deactivated successfully.
Dec 06 10:04:59 np0005548790.localdomain podman[290726]: 2025-12-06 10:04:59.805848191 +0000 UTC m=+0.149195588 container died ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_nightingale, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, version=7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True)
Dec 06 10:04:59 np0005548790.localdomain podman[290746]: 2025-12-06 10:04:59.89881379 +0000 UTC m=+0.081801116 container remove ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_nightingale, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:04:59 np0005548790.localdomain systemd[1]: libpod-conmon-ef724390fd626d2574c02beedff45bfbdbba8d87c64bb92d287c793e134a8a0a.scope: Deactivated successfully.
Dec 06 10:04:59 np0005548790.localdomain sudo[290692]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:00 np0005548790.localdomain sudo[290763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:00 np0005548790.localdomain sudo[290763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:00 np0005548790.localdomain sudo[290763]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:00 np0005548790.localdomain sudo[290781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:00 np0005548790.localdomain sudo[290781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0f1729f9e5de71de90677a29072aeaf54254371158f848171caaa8b3afffea59-merged.mount: Deactivated successfully.
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:05:00 np0005548790.localdomain ceph-mon[288373]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 2025-12-06 10:05:00.609519711 +0000 UTC m=+0.056875338 container create 84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_noether, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:05:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575.scope.
Dec 06 10:05:00 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 2025-12-06 10:05:00.671056594 +0000 UTC m=+0.118412221 container init 84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_noether, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.expose-services=, architecture=x86_64, ceph=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, version=7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 2025-12-06 10:05:00.680075689 +0000 UTC m=+0.127431356 container start 84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_noether, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, vcs-type=git, io.openshift.tags=rhceph ceph)
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 2025-12-06 10:05:00.680355687 +0000 UTC m=+0.127711324 container attach 84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_noether, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, vcs-type=git)
Dec 06 10:05:00 np0005548790.localdomain youthful_noether[290830]: 167 167
Dec 06 10:05:00 np0005548790.localdomain systemd[1]: libpod-84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575.scope: Deactivated successfully.
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 2025-12-06 10:05:00.682585918 +0000 UTC m=+0.129941595 container died 84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_noether, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64)
Dec 06 10:05:00 np0005548790.localdomain podman[290815]: 2025-12-06 10:05:00.588421826 +0000 UTC m=+0.035777513 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:00 np0005548790.localdomain podman[290835]: 2025-12-06 10:05:00.772086612 +0000 UTC m=+0.077031217 container remove 84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_noether, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4)
Dec 06 10:05:00 np0005548790.localdomain systemd[1]: libpod-conmon-84da6b67ee3eee2e6ca6d762f557fee0ba71b661343de878a402d6e47aeb4575.scope: Deactivated successfully.
Dec 06 10:05:00 np0005548790.localdomain sudo[290781]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:00 np0005548790.localdomain sudo[290851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:00 np0005548790.localdomain sudo[290851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:00 np0005548790.localdomain sudo[290851]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:01 np0005548790.localdomain sudo[290869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:01 np0005548790.localdomain sudo[290869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-85419356c2fd6b9362e2435fd69142e638a8182d212d13353a1cb4e9f371c276-merged.mount: Deactivated successfully.
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 2025-12-06 10:05:01.490287656 +0000 UTC m=+0.073417218 container create af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_hertz, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:05:01 np0005548790.localdomain systemd[1]: Started libpod-conmon-af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0.scope.
Dec 06 10:05:01 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 2025-12-06 10:05:01.548025667 +0000 UTC m=+0.131155179 container init af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_hertz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 2025-12-06 10:05:01.557319349 +0000 UTC m=+0.140448861 container start af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_hertz, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=)
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 2025-12-06 10:05:01.557618797 +0000 UTC m=+0.140748309 container attach af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_hertz, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, release=1763362218)
Dec 06 10:05:01 np0005548790.localdomain keen_hertz[290917]: 167 167
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 2025-12-06 10:05:01.460820225 +0000 UTC m=+0.043949737 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:01 np0005548790.localdomain systemd[1]: libpod-af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0.scope: Deactivated successfully.
Dec 06 10:05:01 np0005548790.localdomain podman[290904]: 2025-12-06 10:05:01.561578896 +0000 UTC m=+0.144708418 container died af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_hertz, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-11-26T19:44:28Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=1763362218, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:05:01 np0005548790.localdomain podman[290922]: 2025-12-06 10:05:01.652273092 +0000 UTC m=+0.081353584 container remove af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_hertz, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph)
Dec 06 10:05:01 np0005548790.localdomain systemd[1]: libpod-conmon-af3a14938b0512fb66f02086199e7e92d3102cbd20713df3307f4a89ec15dce0.scope: Deactivated successfully.
Dec 06 10:05:01 np0005548790.localdomain sudo[290869]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:05:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d4b7483882c517660e5f72a48d8aa4bdb49f8a266cfd375f4bb16d7820432a24-merged.mount: Deactivated successfully.
Dec 06 10:05:02 np0005548790.localdomain podman[290939]: 2025-12-06 10:05:02.319000416 +0000 UTC m=+0.085595728 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:05:02 np0005548790.localdomain podman[290939]: 2025-12-06 10:05:02.390281015 +0000 UTC m=+0.156876287 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:05:02 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:05:02 np0005548790.localdomain ceph-mon[288373]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:03 np0005548790.localdomain sudo[290964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:05:03 np0005548790.localdomain sudo[290964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[290964]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:03 np0005548790.localdomain sudo[290982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:05:03 np0005548790.localdomain sudo[290982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[290982]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:03 np0005548790.localdomain sudo[291000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:03 np0005548790.localdomain sudo[291000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[291000]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:03 np0005548790.localdomain sudo[291018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:03 np0005548790.localdomain sudo[291018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[291018]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:03 np0005548790.localdomain sudo[291036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:03 np0005548790.localdomain sudo[291036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[291036]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:03 np0005548790.localdomain sudo[291070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:03 np0005548790.localdomain sudo[291070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[291070]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:03 np0005548790.localdomain sudo[291088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:03 np0005548790.localdomain sudo[291088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:03 np0005548790.localdomain sudo[291088]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain sudo[291106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291106]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:04 np0005548790.localdomain sudo[291124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291124]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:04 np0005548790.localdomain sudo[291142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291142]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Removing np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Removing np0005548785.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: Removing np0005548785.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:04 np0005548790.localdomain sudo[291160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:04 np0005548790.localdomain sudo[291160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291160]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:04 np0005548790.localdomain sudo[291178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291178]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:04 np0005548790.localdomain sudo[291196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291196]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:04 np0005548790.localdomain sudo[291230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291230]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:04 np0005548790.localdomain sudo[291248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291248]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:04 np0005548790.localdomain sudo[291266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:04 np0005548790.localdomain sudo[291266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:04 np0005548790.localdomain sudo[291266]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='client.34194 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005548785.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Added label _no_schedule to host np0005548785.localdomain
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548785.localdomain
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:06 np0005548790.localdomain ceph-mon[288373]: Removing daemon crash.np0005548785 from np0005548785.localdomain -- ports []
Dec 06 10:05:06 np0005548790.localdomain ceph-mon[288373]: from='client.34199 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005548785.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:05:06 np0005548790.localdomain sudo[291284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:05:06 np0005548790.localdomain sudo[291284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:06 np0005548790.localdomain sudo[291284]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:07 np0005548790.localdomain sudo[291302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:05:07 np0005548790.localdomain sudo[291302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:07 np0005548790.localdomain sudo[291302]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:05:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005548785.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain"}]': finished
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth rm", "entity": "client.crash.np0005548785.localdomain"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: Removed host np0005548785.localdomain
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: Removing key for client.crash.np0005548785.localdomain
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth rm", "entity": "client.crash.np0005548785.localdomain"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005548785.localdomain"}]': finished
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:05:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548786 (monmap changed)...
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548786.mczynb (monmap changed)...
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548786.mczynb on np0005548786.localdomain
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:05:10 np0005548790.localdomain ceph-mon[288373]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:14 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:05:14 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:05:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:05:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:05:14 np0005548790.localdomain systemd[1]: tmp-crun.J5RVQb.mount: Deactivated successfully.
Dec 06 10:05:14 np0005548790.localdomain podman[291320]: 2025-12-06 10:05:14.582977873 +0000 UTC m=+0.097852413 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Dec 06 10:05:14 np0005548790.localdomain podman[291320]: 2025-12-06 10:05:14.613849443 +0000 UTC m=+0.128724063 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 06 10:05:14 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: from='client.26610 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: Saving service mon spec with placement label:mon
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:05:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:17 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511080 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0
Dec 06 10:05:17 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:05:17 np0005548790.localdomain ceph-mon[288373]: paxos.2).electionLogic(32) init, last seen epoch 32
Dec 06 10:05:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:17 np0005548790.localdomain sshd[291338]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:05:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:05:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:05:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:05:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:05:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:05:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18686 "" "Go-http-client/1.1"
Dec 06 10:05:19 np0005548790.localdomain sshd[291338]: Connection closed by authenticating user root 45.10.175.77 port 44816 [preauth]
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:05:21 np0005548790.localdomain podman[291341]: 2025-12-06 10:05:21.577063695 +0000 UTC m=+0.091485449 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:05:21 np0005548790.localdomain podman[291341]: 2025-12-06 10:05:21.618272445 +0000 UTC m=+0.132694189 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: tmp-crun.MeOsOh.mount: Deactivated successfully.
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:05:21 np0005548790.localdomain podman[291340]: 2025-12-06 10:05:21.635147774 +0000 UTC m=+0.152038386 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:05:21 np0005548790.localdomain podman[291340]: 2025-12-06 10:05:21.721239236 +0000 UTC m=+0.238129838 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:05:21 np0005548790.localdomain podman[291371]: 2025-12-06 10:05:21.720898757 +0000 UTC m=+0.104272307 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, version=9.6)
Dec 06 10:05:21 np0005548790.localdomain podman[291371]: 2025-12-06 10:05:21.806372031 +0000 UTC m=+0.189745521 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 10:05:21 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='client.26625 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548788"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: Remove daemons mon.np0005548788
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: Safe to remove mon.np0005548788: new quorum should be ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789'] (from ['np0005548787', 'np0005548786', 'np0005548790', 'np0005548789'])
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: Removing monitor np0005548788 from monmap...
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: Removing daemon mon.np0005548788 from np0005548788.localdomain -- ports []
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789 in quorum (ranks 0,2,3)
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789 in quorum (ranks 0,1,2,3)
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: monmap epoch 8
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:05:17.086581+0000
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: osdmap e88: 6 total, 6 up, 6 in
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: mgrmap e18: np0005548788.yvwbqq(active, since 53s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:22 np0005548790.localdomain systemd[1]: tmp-crun.0RqsIC.mount: Deactivated successfully.
Dec 06 10:05:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:23 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:05:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:23 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:05:23 np0005548790.localdomain ceph-mon[288373]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:05:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:05:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:05:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:24 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:05:24 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:05:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:25.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:25.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:05:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:25.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:05:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:25.359 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:05:25 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:05:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:26.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:05:26 np0005548790.localdomain podman[291402]: 2025-12-06 10:05:26.571352745 +0000 UTC m=+0.084027217 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 10:05:26 np0005548790.localdomain podman[291402]: 2025-12-06 10:05:26.588227114 +0000 UTC m=+0.100901606 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:05:26 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/3699539753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/2165954404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:26 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.331 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.353 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.353 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.353 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:27 np0005548790.localdomain sudo[291439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:27 np0005548790.localdomain sudo[291439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:27 np0005548790.localdomain sudo[291439]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:27 np0005548790.localdomain sudo[291457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:27 np0005548790.localdomain sudo[291457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:27 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:05:27 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2317264595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:27.823 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:27 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:28 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/2317264595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.042 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.043 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12029MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.044 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.044 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.114 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.114 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.140 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: tmp-crun.39wD9R.mount: Deactivated successfully.
Dec 06 10:05:28 np0005548790.localdomain podman[291491]: 2025-12-06 10:05:28.245212803 +0000 UTC m=+0.105319946 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:05:28 np0005548790.localdomain podman[291491]: 2025-12-06 10:05:28.286179187 +0000 UTC m=+0.146286330 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 2025-12-06 10:05:28.306071438 +0000 UTC m=+0.144265595 container create a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_sanderson, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z)
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: Started libpod-conmon-a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf.scope.
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 2025-12-06 10:05:28.263237163 +0000 UTC m=+0.101431360 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 2025-12-06 10:05:28.383058282 +0000 UTC m=+0.221252439 container init a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_sanderson, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 2025-12-06 10:05:28.395368796 +0000 UTC m=+0.233562943 container start a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_sanderson, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, name=rhceph, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z)
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 2025-12-06 10:05:28.395683726 +0000 UTC m=+0.233877913 container attach a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_sanderson, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64)
Dec 06 10:05:28 np0005548790.localdomain stupefied_sanderson[291553]: 167 167
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: libpod-a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf.scope: Deactivated successfully.
Dec 06 10:05:28 np0005548790.localdomain podman[291500]: 2025-12-06 10:05:28.400386563 +0000 UTC m=+0.238580770 container died a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_sanderson, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc.)
Dec 06 10:05:28 np0005548790.localdomain podman[291558]: 2025-12-06 10:05:28.493813135 +0000 UTC m=+0.081500498 container remove a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_sanderson, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_CLEAN=True, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:05:28 np0005548790.localdomain systemd[1]: libpod-conmon-a88d71d34ded97de4b3809c39416d0722d7142de4ab7fc23c6ea4b0f733203cf.scope: Deactivated successfully.
Dec 06 10:05:28 np0005548790.localdomain sudo[291457]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.590 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.599 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.617 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.619 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:05:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:28.619 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:28 np0005548790.localdomain sudo[291577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:28 np0005548790.localdomain sudo[291577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:28 np0005548790.localdomain sudo[291577]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:28 np0005548790.localdomain sudo[291595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:28 np0005548790.localdomain sudo[291595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/2669262318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:05:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:29 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a12bd2550e6af0f4b0100de6d1162418ce9a1292a866b5f0a67782639487c437-merged.mount: Deactivated successfully.
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 2025-12-06 10:05:29.242319343 +0000 UTC m=+0.075639098 container create 7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ritchie, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_CLEAN=True, version=7, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True)
Dec 06 10:05:29 np0005548790.localdomain systemd[1]: Started libpod-conmon-7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899.scope.
Dec 06 10:05:29 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 2025-12-06 10:05:29.304208566 +0000 UTC m=+0.137528361 container init 7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ritchie, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, release=1763362218, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4)
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 2025-12-06 10:05:29.212027049 +0000 UTC m=+0.045346864 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 2025-12-06 10:05:29.31576338 +0000 UTC m=+0.149083165 container start 7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ritchie, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True)
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 2025-12-06 10:05:29.316017027 +0000 UTC m=+0.149336902 container attach 7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ritchie, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-type=git, version=7)
Dec 06 10:05:29 np0005548790.localdomain nervous_ritchie[291644]: 167 167
Dec 06 10:05:29 np0005548790.localdomain podman[291629]: 2025-12-06 10:05:29.31982872 +0000 UTC m=+0.153148535 container died 7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ritchie, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218)
Dec 06 10:05:29 np0005548790.localdomain systemd[1]: libpod-7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899.scope: Deactivated successfully.
Dec 06 10:05:29 np0005548790.localdomain podman[291650]: 2025-12-06 10:05:29.418772642 +0000 UTC m=+0.085272391 container remove 7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ritchie, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7)
Dec 06 10:05:29 np0005548790.localdomain systemd[1]: libpod-conmon-7b130a60ee09d10f207ff442f75e16988428c4c1036e232d41af4466f2b6d899.scope: Deactivated successfully.
Dec 06 10:05:29 np0005548790.localdomain sudo[291595]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:29 np0005548790.localdomain sudo[291674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:29 np0005548790.localdomain sudo[291674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:29 np0005548790.localdomain sudo[291674]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:29 np0005548790.localdomain sudo[291692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:29 np0005548790.localdomain sudo[291692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:30 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:05:30 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:05:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:05:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 2025-12-06 10:05:30.218889354 +0000 UTC m=+0.076872791 container create a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_ptolemy, vcs-type=git, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, distribution-scope=public)
Dec 06 10:05:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-092bc35b89f316c2b70166c31bed77ba796d8e7e631df900a91c0e607c60f8b2-merged.mount: Deactivated successfully.
Dec 06 10:05:30 np0005548790.localdomain systemd[1]: Started libpod-conmon-a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524.scope.
Dec 06 10:05:30 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 2025-12-06 10:05:30.280189181 +0000 UTC m=+0.138172618 container init a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_ptolemy, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-type=git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 2025-12-06 10:05:30.18967761 +0000 UTC m=+0.047661077 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:30 np0005548790.localdomain thirsty_ptolemy[291743]: 167 167
Dec 06 10:05:30 np0005548790.localdomain systemd[1]: libpod-a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524.scope: Deactivated successfully.
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 2025-12-06 10:05:30.292556398 +0000 UTC m=+0.150539845 container start a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_ptolemy, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 2025-12-06 10:05:30.293261107 +0000 UTC m=+0.151244594 container attach a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_ptolemy, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, distribution-scope=public, name=rhceph, io.openshift.expose-services=)
Dec 06 10:05:30 np0005548790.localdomain podman[291728]: 2025-12-06 10:05:30.295602631 +0000 UTC m=+0.153586068 container died a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_ptolemy, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=1763362218, name=rhceph, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 06 10:05:30 np0005548790.localdomain podman[291748]: 2025-12-06 10:05:30.387700786 +0000 UTC m=+0.082728221 container remove a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=thirsty_ptolemy, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, architecture=x86_64, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True)
Dec 06 10:05:30 np0005548790.localdomain systemd[1]: libpod-conmon-a3be72c87fe6d038912b1a7ac58bfa48c0bc493561e0c7b5814f396621808524.scope: Deactivated successfully.
Dec 06 10:05:30 np0005548790.localdomain sudo[291692]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:30.621 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:30.622 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:30.622 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:30.623 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:05:30 np0005548790.localdomain sudo[291769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:30 np0005548790.localdomain sudo[291769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:30 np0005548790.localdomain sudo[291769]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:30 np0005548790.localdomain sudo[291787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:30 np0005548790.localdomain sudo[291787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 2025-12-06 10:05:31.218687908 +0000 UTC m=+0.079633717 container create 9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_goodall, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container)
Dec 06 10:05:31 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-84f88a5753563290794dd5ba0639c6aa77b4b841f7e76581fd6c3a4486509c4b-merged.mount: Deactivated successfully.
Dec 06 10:05:31 np0005548790.localdomain systemd[1]: Started libpod-conmon-9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6.scope.
Dec 06 10:05:31 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 2025-12-06 10:05:31.283303615 +0000 UTC m=+0.144249454 container init 9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_goodall, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 2025-12-06 10:05:31.188145167 +0000 UTC m=+0.049091016 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:31 np0005548790.localdomain systemd[1]: tmp-crun.7NAjX4.mount: Deactivated successfully.
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 2025-12-06 10:05:31.295895538 +0000 UTC m=+0.156841337 container start 9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_goodall, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1763362218, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vendor=Red Hat, Inc.)
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 2025-12-06 10:05:31.296149205 +0000 UTC m=+0.157095064 container attach 9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_goodall, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64)
Dec 06 10:05:31 np0005548790.localdomain beautiful_goodall[291838]: 167 167
Dec 06 10:05:31 np0005548790.localdomain systemd[1]: libpod-9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6.scope: Deactivated successfully.
Dec 06 10:05:31 np0005548790.localdomain podman[291823]: 2025-12-06 10:05:31.300349118 +0000 UTC m=+0.161294977 container died 9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_goodall, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git)
Dec 06 10:05:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:05:31.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:05:31 np0005548790.localdomain podman[291843]: 2025-12-06 10:05:31.413407484 +0000 UTC m=+0.100539005 container remove 9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_goodall, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True)
Dec 06 10:05:31 np0005548790.localdomain systemd[1]: libpod-conmon-9629eb11472202b7de058f466374c27db0974225b6bcdd1d0416c7916ce87bd6.scope: Deactivated successfully.
Dec 06 10:05:31 np0005548790.localdomain sudo[291787]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:31 np0005548790.localdomain sudo[291860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:31 np0005548790.localdomain sudo[291860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:31 np0005548790.localdomain sudo[291860]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:31 np0005548790.localdomain sudo[291878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:31 np0005548790.localdomain sudo[291878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/556966387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-cbc4472d94d3f3815c028c434948bd19926836c26971ee89acde2b371d4f0637-merged.mount: Deactivated successfully.
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 2025-12-06 10:05:32.23022123 +0000 UTC m=+0.095629182 container create b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shockley, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: Started libpod-conmon-b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e.scope.
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 2025-12-06 10:05:32.284377304 +0000 UTC m=+0.149785216 container init b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shockley, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 2025-12-06 10:05:32.19127006 +0000 UTC m=+0.056678052 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 2025-12-06 10:05:32.295124756 +0000 UTC m=+0.160532708 container start b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shockley, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, name=rhceph, RELEASE=main, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 2025-12-06 10:05:32.295311691 +0000 UTC m=+0.160719673 container attach b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shockley, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, distribution-scope=public)
Dec 06 10:05:32 np0005548790.localdomain elegant_shockley[291930]: 167 167
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: libpod-b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e.scope: Deactivated successfully.
Dec 06 10:05:32 np0005548790.localdomain podman[291914]: 2025-12-06 10:05:32.311006677 +0000 UTC m=+0.176414659 container died b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shockley, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, distribution-scope=public, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Dec 06 10:05:32 np0005548790.localdomain podman[291935]: 2025-12-06 10:05:32.398265931 +0000 UTC m=+0.079172465 container remove b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shockley, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7)
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: libpod-conmon-b95126378448ea8e43597b4565b4b0a5da62e687dd89cb8e100033f11fe72a4e.scope: Deactivated successfully.
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:05:32 np0005548790.localdomain sudo[291878]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:32 np0005548790.localdomain podman[291952]: 2025-12-06 10:05:32.523828556 +0000 UTC m=+0.090989685 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:05:32 np0005548790.localdomain podman[291952]: 2025-12-06 10:05:32.5941945 +0000 UTC m=+0.161355629 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:05:32 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:05:32 np0005548790.localdomain sudo[291973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:32 np0005548790.localdomain sudo[291973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:32 np0005548790.localdomain sudo[291973]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:32 np0005548790.localdomain sudo[291995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:05:32 np0005548790.localdomain sudo[291995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:33 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:05:33 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:05:33 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/2556899461' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:05:33 np0005548790.localdomain ceph-mon[288373]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:33 np0005548790.localdomain systemd[1]: tmp-crun.qlqHvn.mount: Deactivated successfully.
Dec 06 10:05:33 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0573941c70db7243013b81fc2bb1cf1dc1a289d1bab8bc5e36c2b4f99c264e9c-merged.mount: Deactivated successfully.
Dec 06 10:05:33 np0005548790.localdomain sudo[291995]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: from='client.26676 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005548788.localdomain:172.18.0.103", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:34 np0005548790.localdomain ceph-mon[288373]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:35 np0005548790.localdomain sudo[292044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:05:35 np0005548790.localdomain sudo[292044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292044]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:05:35 np0005548790.localdomain sudo[292062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292062]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:35 np0005548790.localdomain sudo[292080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292080]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:35 np0005548790.localdomain sudo[292098]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292098]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:35 np0005548790.localdomain sudo[292116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292116]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:35 np0005548790.localdomain sudo[292150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292150]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:35 np0005548790.localdomain sudo[292168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292168]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:35 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:35 np0005548790.localdomain sudo[292186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:05:35 np0005548790.localdomain sudo[292186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292186]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:35 np0005548790.localdomain sudo[292204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292204]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:35 np0005548790.localdomain sudo[292222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:35 np0005548790.localdomain sudo[292222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:35 np0005548790.localdomain sudo[292222]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain sudo[292240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:36 np0005548790.localdomain sudo[292240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:36 np0005548790.localdomain sudo[292240]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain sudo[292258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:36 np0005548790.localdomain sudo[292258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:36 np0005548790.localdomain sudo[292258]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain sudo[292276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:36 np0005548790.localdomain sudo[292276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:36 np0005548790.localdomain sudo[292276]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain sudo[292310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:36 np0005548790.localdomain sudo[292310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:36 np0005548790.localdomain sudo[292310]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain sudo[292328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:36 np0005548790.localdomain sudo[292328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:36 np0005548790.localdomain sudo[292328]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain sudo[292346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:36 np0005548790.localdomain sudo[292346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:36 np0005548790.localdomain sudo[292346]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 06 10:05:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 06 10:05:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 06 10:05:37 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563551fee000 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0
Dec 06 10:05:37 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:05:37 np0005548790.localdomain ceph-mon[288373]: paxos.2).electionLogic(38) init, last seen epoch 38
Dec 06 10:05:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:41 np0005548790.localdomain ceph-mds[285635]: mds.beacon.mds.np0005548790.vhcezv missed beacon ack from the monitors
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548786 calling monitor election
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789 in quorum (ranks 0,2,3)
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548786,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3,4)
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: monmap epoch 9
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:05:37.030029+0000
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548786
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: osdmap e88: 6 total, 6 up, 6 in
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: mgrmap e18: np0005548788.yvwbqq(active, since 73s), standbys: np0005548787.umwsra, np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: overall HEALTH_OK
Dec 06 10:05:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:42 np0005548790.localdomain sudo[292364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:05:42 np0005548790.localdomain sudo[292364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:42 np0005548790.localdomain sudo[292364]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/1008829953' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/1008829953' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548786.mczynb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548786.mczynb (monmap changed)...
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548786.mczynb on np0005548786.localdomain
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:05:45 np0005548790.localdomain podman[292382]: 2025-12-06 10:05:45.568852497 +0000 UTC m=+0.085588730 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:05:45 np0005548790.localdomain podman[292382]: 2025-12-06 10:05:45.574155211 +0000 UTC m=+0.090891414 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:05:45 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:05:46 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:05:48.388 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:05:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:05:48.388 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:05:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:05:48.389 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:05:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:05:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:05:48 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/3950593935' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:05:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:05:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:05:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:05:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18696 "" "Go-http-client/1.1"
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:05:49 np0005548790.localdomain ceph-mon[288373]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='client.44109 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: Reconfig service osd.default_drive_group
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e89 e89: 6 total, 6 up, 6 in
Dec 06 10:05:51 np0005548790.localdomain sshd[289233]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: session-64.scope: Deactivated successfully.
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: session-64.scope: Consumed 17.648s CPU time.
Dec 06 10:05:51 np0005548790.localdomain systemd-logind[760]: Session 64 logged out. Waiting for processes to exit.
Dec 06 10:05:51 np0005548790.localdomain systemd-logind[760]: Removed session 64.
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 ' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.17055 172.18.0.106:0/93848351' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/3205170338' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: Activating manager daemon np0005548787.umwsra
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: osdmap e89: 6 total, 6 up, 6 in
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: mgrmap e19: np0005548787.umwsra(active, starting, since 0.0519269s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548786"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr metadata", "who": "np0005548786.mczynb", "id": "np0005548786.mczynb"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: Manager daemon np0005548787.umwsra is now available
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"}]': finished
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"} : dispatch
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548785.localdomain.devices.0"}]': finished
Dec 06 10:05:51 np0005548790.localdomain sshd[292400]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:05:51 np0005548790.localdomain sshd[292400]: Accepted publickey for ceph-admin from 192.168.122.105 port 38740 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:05:51 np0005548790.localdomain systemd-logind[760]: New session 65 of user ceph-admin.
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: Started Session 65 of User ceph-admin.
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:05:51 np0005548790.localdomain sshd[292400]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:05:51 np0005548790.localdomain podman[292404]: 2025-12-06 10:05:51.878174792 +0000 UTC m=+0.089479075 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0)
Dec 06 10:05:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:51 np0005548790.localdomain sudo[292443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:51 np0005548790.localdomain sudo[292443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:51 np0005548790.localdomain podman[292403]: 2025-12-06 10:05:51.946081639 +0000 UTC m=+0.159497449 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:05:51 np0005548790.localdomain sudo[292443]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:51 np0005548790.localdomain podman[292404]: 2025-12-06 10:05:51.962187487 +0000 UTC m=+0.173491780 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:05:51 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:05:51 np0005548790.localdomain podman[292403]: 2025-12-06 10:05:51.983275951 +0000 UTC m=+0.196691751 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:05:52 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:05:52 np0005548790.localdomain sudo[292479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:05:52 np0005548790.localdomain podman[292423]: 2025-12-06 10:05:51.913872853 +0000 UTC m=+0.071115456 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:05:52 np0005548790.localdomain sudo[292479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:52 np0005548790.localdomain podman[292423]: 2025-12-06 10:05:52.045242997 +0000 UTC m=+0.202485590 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec 06 10:05:52 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:05:52 np0005548790.localdomain ceph-mon[288373]: removing stray HostCache host record np0005548785.localdomain.devices.0
Dec 06 10:05:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548787.umwsra/mirror_snapshot_schedule"} : dispatch
Dec 06 10:05:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548787.umwsra/trash_purge_schedule"} : dispatch
Dec 06 10:05:52 np0005548790.localdomain ceph-mon[288373]: mgrmap e20: np0005548787.umwsra(active, since 1.0797s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:05:52 np0005548790.localdomain podman[292567]: 2025-12-06 10:05:52.721981962 +0000 UTC m=+0.081245910 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, version=7, release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:05:52 np0005548790.localdomain podman[292567]: 2025-12-06 10:05:52.830368081 +0000 UTC m=+0.189632049 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vendor=Red Hat, Inc., ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main)
Dec 06 10:05:53 np0005548790.localdomain sudo[292479]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:53 np0005548790.localdomain ceph-mon[288373]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:53 np0005548790.localdomain ceph-mon[288373]: mgrmap e21: np0005548787.umwsra(active, since 2s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:05:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:53 np0005548790.localdomain sudo[292684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:53 np0005548790.localdomain sudo[292684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:53 np0005548790.localdomain sudo[292684]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:05:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:05:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:05:53 np0005548790.localdomain sudo[292702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:05:53 np0005548790.localdomain sudo[292702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:54 np0005548790.localdomain sudo[292702]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:54 np0005548790.localdomain sudo[292753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:05:54 np0005548790.localdomain sudo[292753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:54 np0005548790.localdomain sudo[292753]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:05:52] ENGINE Bus STARTING
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:05:52] ENGINE Serving on https://172.18.0.105:7150
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:05:52] ENGINE Client ('172.18.0.105', 55368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:05:53] ENGINE Serving on http://172.18.0.105:8765
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:05:53] ENGINE Bus STARTED
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:54 np0005548790.localdomain sudo[292771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:05:54 np0005548790.localdomain sudo[292771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292771]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:05:55 np0005548790.localdomain sudo[292808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292808]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:05:55 np0005548790.localdomain sudo[292826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292826]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:55 np0005548790.localdomain sudo[292844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292844]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:55 np0005548790.localdomain sudo[292862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292862]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:55 np0005548790.localdomain sudo[292880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292880]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:55 np0005548790.localdomain sudo[292914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292914]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:05:55 np0005548790.localdomain sudo[292932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292932]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:55 np0005548790.localdomain sudo[292950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:05:55 np0005548790.localdomain sudo[292950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:55 np0005548790.localdomain sudo[292950]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[292968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:56 np0005548790.localdomain sudo[292968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[292968]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config rm", "who": "osd/host:np0005548786", "name": "osd_memory_target"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: mgrmap e22: np0005548787.umwsra(active, since 4s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq
Dec 06 10:05:56 np0005548790.localdomain sudo[292986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:56 np0005548790.localdomain sudo[292986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[292986]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:56 np0005548790.localdomain sudo[293004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293004]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:56 np0005548790.localdomain sudo[293022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293022]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:56 np0005548790.localdomain sudo[293040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293040]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:56 np0005548790.localdomain sudo[293074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293074]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:05:56 np0005548790.localdomain sudo[293092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293092]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:56 np0005548790.localdomain sudo[293110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:05:56 np0005548790.localdomain sudo[293110]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:05:56 np0005548790.localdomain sudo[293129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293129]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain podman[293128]: 2025-12-06 10:05:56.731725074 +0000 UTC m=+0.079514134 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:05:56 np0005548790.localdomain podman[293128]: 2025-12-06 10:05:56.740260076 +0000 UTC m=+0.088049136 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:05:56 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:05:56 np0005548790.localdomain sudo[293162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:05:56 np0005548790.localdomain sudo[293162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293162]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain sudo[293183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:05:56 np0005548790.localdomain sudo[293183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293183]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:05:56 np0005548790.localdomain sudo[293201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:56 np0005548790.localdomain sudo[293201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:56 np0005548790.localdomain sudo[293201]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293219]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:57 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:57 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:57 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:57 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:05:57 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548788.yvwbqq started
Dec 06 10:05:57 np0005548790.localdomain sudo[293253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293253]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293271]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:57 np0005548790.localdomain sudo[293289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293289]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:57 np0005548790.localdomain sudo[293307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293307]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:05:57 np0005548790.localdomain sudo[293325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293325]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293343]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:05:57 np0005548790.localdomain sudo[293361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293361]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293379]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293379]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293413]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:57 np0005548790.localdomain sudo[293431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:05:57 np0005548790.localdomain sudo[293431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:57 np0005548790.localdomain sudo[293431]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:58 np0005548790.localdomain sudo[293449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain sudo[293449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:58 np0005548790.localdomain sudo[293449]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: mgrmap e23: np0005548787.umwsra(active, since 5s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:05:58 np0005548790.localdomain podman[293467]: 2025-12-06 10:05:58.571678678 +0000 UTC m=+0.083742868 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:05:58 np0005548790.localdomain sudo[293474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:05:58 np0005548790.localdomain sudo[293474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:05:58 np0005548790.localdomain sudo[293474]: pam_unix(sudo:session): session closed for user root
Dec 06 10:05:58 np0005548790.localdomain podman[293467]: 2025-12-06 10:05:58.607965135 +0000 UTC m=+0.120029285 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:05:58 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 0 B/s wr, 20 op/s
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.583513) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559583574, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2851, "num_deletes": 255, "total_data_size": 9201309, "memory_usage": 9798000, "flush_reason": "Manual Compaction"}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559624726, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5563548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11499, "largest_seqno": 14345, "table_properties": {"data_size": 5551678, "index_size": 7415, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30447, "raw_average_key_size": 22, "raw_value_size": 5525769, "raw_average_value_size": 4129, "num_data_blocks": 320, "num_entries": 1338, "num_filter_entries": 1338, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015498, "oldest_key_time": 1765015498, "file_creation_time": 1765015559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 41273 microseconds, and 13223 cpu microseconds.
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.624764) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5563548 bytes OK
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.624810) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.626480) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.626494) EVENT_LOG_v1 {"time_micros": 1765015559626490, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.626509) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9187565, prev total WAL file size 9203789, number of live WAL files 2.
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.627627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5433KB)], [15(10MB)]
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559627689, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 16741939, "oldest_snapshot_seqno": -1}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10058 keys, 15396727 bytes, temperature: kUnknown
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559733118, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 15396727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15338609, "index_size": 31905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 267747, "raw_average_key_size": 26, "raw_value_size": 15165924, "raw_average_value_size": 1507, "num_data_blocks": 1223, "num_entries": 10058, "num_filter_entries": 10058, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015559, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.733466) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 15396727 bytes
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734695) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.6 rd, 145.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.3, 10.7 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(5.8) write-amplify(2.8) OK, records in: 10610, records dropped: 552 output_compression: NoCompression
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734726) EVENT_LOG_v1 {"time_micros": 1765015559734713, "job": 6, "event": "compaction_finished", "compaction_time_micros": 105544, "compaction_time_cpu_micros": 41767, "output_level": 6, "num_output_files": 1, "total_output_size": 15396727, "num_input_records": 10610, "num_output_records": 10058, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.627518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:05:59.734939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559736919, "job": 0, "event": "table_file_deletion", "file_number": 17}
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:05:59 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015559738527, "job": 0, "event": "table_file_deletion", "file_number": 15}
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:06:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:06:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:03 np0005548790.localdomain podman[293508]: 2025-12-06 10:06:03.562462752 +0000 UTC m=+0.080318946 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:06:03 np0005548790.localdomain podman[293508]: 2025-12-06 10:06:03.667304313 +0000 UTC m=+0.185160527 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:06:03 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='client.44134 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:06:05 np0005548790.localdomain sudo[293535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:05 np0005548790.localdomain sudo[293535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:05 np0005548790.localdomain sudo[293535]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:05 np0005548790.localdomain sudo[293553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:05 np0005548790.localdomain sudo[293553]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 2025-12-06 10:06:05.888295372 +0000 UTC m=+0.075600807 container create 95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_dubinsky, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Dec 06 10:06:05 np0005548790.localdomain systemd[1]: Started libpod-conmon-95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd.scope.
Dec 06 10:06:05 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 2025-12-06 10:06:05.856838677 +0000 UTC m=+0.044144142 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 2025-12-06 10:06:05.961213306 +0000 UTC m=+0.148518741 container init 95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_dubinsky, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True)
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 2025-12-06 10:06:05.971592968 +0000 UTC m=+0.158898393 container start 95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_dubinsky, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 2025-12-06 10:06:05.971963308 +0000 UTC m=+0.159268713 container attach 95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_dubinsky, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=)
Dec 06 10:06:05 np0005548790.localdomain great_dubinsky[293603]: 167 167
Dec 06 10:06:05 np0005548790.localdomain systemd[1]: libpod-95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd.scope: Deactivated successfully.
Dec 06 10:06:05 np0005548790.localdomain podman[293587]: 2025-12-06 10:06:05.977130779 +0000 UTC m=+0.164436194 container died 95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_dubinsky, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7)
Dec 06 10:06:06 np0005548790.localdomain podman[293608]: 2025-12-06 10:06:06.077427147 +0000 UTC m=+0.091034057 container remove 95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_dubinsky, GIT_CLEAN=True, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, description=Red Hat Ceph Storage 7)
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: libpod-conmon-95f512d03342cf4931524b7bb7767e4de912ac3940ec29997a094389705a51cd.scope: Deactivated successfully.
Dec 06 10:06:06 np0005548790.localdomain sudo[293553]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:06 np0005548790.localdomain sudo[293626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:06 np0005548790.localdomain sudo[293626]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:06 np0005548790.localdomain sudo[293626]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='client.26785 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: Saving service mon spec with placement label:mon
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:06 np0005548790.localdomain sudo[293644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:06 np0005548790.localdomain sudo[293644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 2025-12-06 10:06:06.803546787 +0000 UTC m=+0.070144259 container create a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_clarke, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=)
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: Started libpod-conmon-a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d.scope.
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 2025-12-06 10:06:06.872503413 +0000 UTC m=+0.139100935 container init a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_clarke, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 2025-12-06 10:06:06.778405893 +0000 UTC m=+0.045003425 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 2025-12-06 10:06:06.882333549 +0000 UTC m=+0.148931021 container start a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_clarke, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, ceph=True)
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 2025-12-06 10:06:06.882628077 +0000 UTC m=+0.149225589 container attach a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_clarke, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, name=rhceph, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 06 10:06:06 np0005548790.localdomain wizardly_clarke[293693]: 167 167
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: libpod-a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d.scope: Deactivated successfully.
Dec 06 10:06:06 np0005548790.localdomain podman[293678]: 2025-12-06 10:06:06.886994926 +0000 UTC m=+0.153592448 container died a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_clarke, version=7, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:06:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4f342bc697bb70930cd37970c24e28597b614c0ec06bdbb768cbc98ae492f745-merged.mount: Deactivated successfully.
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-53f2ba0e4451d29399345a02ac95423508926b3e703382b02dd0fa262a93c5d8-merged.mount: Deactivated successfully.
Dec 06 10:06:06 np0005548790.localdomain podman[293698]: 2025-12-06 10:06:06.988368354 +0000 UTC m=+0.089817344 container remove a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_clarke, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7)
Dec 06 10:06:06 np0005548790.localdomain systemd[1]: libpod-conmon-a44317bbb60b663425f0fb1f4af91d5cb5e19e5cbfb0d6bb60195b26c647a99d.scope: Deactivated successfully.
Dec 06 10:06:07 np0005548790.localdomain sudo[293644]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:06:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:07 np0005548790.localdomain sudo[293723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:07 np0005548790.localdomain sudo[293723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:07 np0005548790.localdomain sudo[293723]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:07 np0005548790.localdomain sudo[293741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:07 np0005548790.localdomain sudo[293741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 2025-12-06 10:06:07.846274387 +0000 UTC m=+0.075673629 container create 6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_morse, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=rhceph-container)
Dec 06 10:06:07 np0005548790.localdomain systemd[1]: Started libpod-conmon-6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255.scope.
Dec 06 10:06:07 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 2025-12-06 10:06:07.912804877 +0000 UTC m=+0.142204129 container init 6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_morse, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, release=1763362218, com.redhat.component=rhceph-container)
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 2025-12-06 10:06:07.815614403 +0000 UTC m=+0.045013655 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:07 np0005548790.localdomain determined_morse[293792]: 167 167
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 2025-12-06 10:06:07.923798626 +0000 UTC m=+0.153197868 container start 6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_morse, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, version=7)
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 2025-12-06 10:06:07.924424103 +0000 UTC m=+0.153823385 container attach 6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_morse, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, name=rhceph, release=1763362218)
Dec 06 10:06:07 np0005548790.localdomain systemd[1]: libpod-6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255.scope: Deactivated successfully.
Dec 06 10:06:07 np0005548790.localdomain podman[293777]: 2025-12-06 10:06:07.926969462 +0000 UTC m=+0.156368714 container died 6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_morse, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, name=rhceph, release=1763362218, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Dec 06 10:06:08 np0005548790.localdomain podman[293798]: 2025-12-06 10:06:08.020367792 +0000 UTC m=+0.085676980 container remove 6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_morse, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.tags=rhceph ceph)
Dec 06 10:06:08 np0005548790.localdomain systemd[1]: libpod-conmon-6c1b35c9f303b0bf131829b6465232308bf270671e693b1ac1c98e06e8b1e255.scope: Deactivated successfully.
Dec 06 10:06:08 np0005548790.localdomain sudo[293741]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='client.44149 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548788", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:06:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:08 np0005548790.localdomain sudo[293822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:08 np0005548790.localdomain sudo[293822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:08 np0005548790.localdomain sudo[293822]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:08 np0005548790.localdomain sudo[293840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:08 np0005548790.localdomain sudo[293840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 2025-12-06 10:06:08.856610208 +0000 UTC m=+0.100635148 container create 44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_chaplygin, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:06:08 np0005548790.localdomain systemd[1]: Started libpod-conmon-44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc.scope.
Dec 06 10:06:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4b0846bac11e5c11daf47cd6bb5cea8802861d495499c6391d9f691709dbfc52-merged.mount: Deactivated successfully.
Dec 06 10:06:08 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 2025-12-06 10:06:08.91476373 +0000 UTC m=+0.158788670 container init 44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_chaplygin, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=1763362218, ceph=True, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, vcs-type=git)
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 2025-12-06 10:06:08.826227911 +0000 UTC m=+0.070252871 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:08 np0005548790.localdomain systemd[1]: tmp-crun.L6iXY4.mount: Deactivated successfully.
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 2025-12-06 10:06:08.931065412 +0000 UTC m=+0.175090362 container start 44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_chaplygin, CEPH_POINT_RELEASE=, release=1763362218, GIT_BRANCH=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, ceph=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 2025-12-06 10:06:08.931316219 +0000 UTC m=+0.175341159 container attach 44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_chaplygin, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:06:08 np0005548790.localdomain tender_chaplygin[293890]: 167 167
Dec 06 10:06:08 np0005548790.localdomain systemd[1]: libpod-44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc.scope: Deactivated successfully.
Dec 06 10:06:08 np0005548790.localdomain podman[293875]: 2025-12-06 10:06:08.93650622 +0000 UTC m=+0.180531200 container died 44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_chaplygin, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218)
Dec 06 10:06:09 np0005548790.localdomain podman[293895]: 2025-12-06 10:06:09.015914771 +0000 UTC m=+0.072697978 container remove 44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=tender_chaplygin, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, vcs-type=git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, distribution-scope=public)
Dec 06 10:06:09 np0005548790.localdomain systemd[1]: libpod-conmon-44c2f8ba64a1c714799176b892f5f31505c462833dc0012f445501f8902452fc.scope: Deactivated successfully.
Dec 06 10:06:09 np0005548790.localdomain sudo[293840]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:09 np0005548790.localdomain sudo[293912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:09 np0005548790.localdomain sudo[293912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:09 np0005548790.localdomain sudo[293912]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:09 np0005548790.localdomain sudo[293930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:09 np0005548790.localdomain sudo[293930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/2211595861' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 2025-12-06 10:06:09.686942362 +0000 UTC m=+0.073745347 container create 860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_dirac, vendor=Red Hat, Inc., release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, version=7, io.openshift.tags=rhceph ceph, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 06 10:06:09 np0005548790.localdomain systemd[1]: Started libpod-conmon-860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37.scope.
Dec 06 10:06:09 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 2025-12-06 10:06:09.749270387 +0000 UTC m=+0.136073372 container init 860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_dirac, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 2025-12-06 10:06:09.657377318 +0000 UTC m=+0.044180333 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 2025-12-06 10:06:09.757310405 +0000 UTC m=+0.144113410 container start 860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_dirac, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, release=1763362218, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.expose-services=)
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 2025-12-06 10:06:09.757582143 +0000 UTC m=+0.144385198 container attach 860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_dirac, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, version=7, RELEASE=main, release=1763362218, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-type=git)
Dec 06 10:06:09 np0005548790.localdomain wizardly_dirac[293979]: 167 167
Dec 06 10:06:09 np0005548790.localdomain systemd[1]: libpod-860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37.scope: Deactivated successfully.
Dec 06 10:06:09 np0005548790.localdomain podman[293964]: 2025-12-06 10:06:09.7607422 +0000 UTC m=+0.147545205 container died 860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_dirac, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, release=1763362218, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Dec 06 10:06:09 np0005548790.localdomain podman[293984]: 2025-12-06 10:06:09.854504619 +0000 UTC m=+0.085979060 container remove 860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_dirac, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=)
Dec 06 10:06:09 np0005548790.localdomain systemd[1]: libpod-conmon-860071c57556aa70ed29f8c6890e05b22e79f765f2d98a9418619f8a1e15cb37.scope: Deactivated successfully.
Dec 06 10:06:09 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a3609d9fe22c6d6bad013d62fdc51f6e42fad19250f44544a314586dd9275f61-merged.mount: Deactivated successfully.
Dec 06 10:06:09 np0005548790.localdomain sudo[293930]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:10 np0005548790.localdomain sudo[294002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:10 np0005548790.localdomain sudo[294002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:10 np0005548790.localdomain sudo[294002]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:10 np0005548790.localdomain sudo[294020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:10 np0005548790.localdomain sudo[294020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 2025-12-06 10:06:10.569748173 +0000 UTC m=+0.082170756 container create 4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_hugle, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:06:10 np0005548790.localdomain systemd[1]: Started libpod-conmon-4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c.scope.
Dec 06 10:06:10 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 2025-12-06 10:06:10.535847561 +0000 UTC m=+0.048270194 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 2025-12-06 10:06:10.63947641 +0000 UTC m=+0.151898993 container init 4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_hugle, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 2025-12-06 10:06:10.649965845 +0000 UTC m=+0.162388438 container start 4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_hugle, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 2025-12-06 10:06:10.650441538 +0000 UTC m=+0.162864161 container attach 4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_hugle, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_BRANCH=main, version=7, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True)
Dec 06 10:06:10 np0005548790.localdomain wizardly_hugle[294070]: 167 167
Dec 06 10:06:10 np0005548790.localdomain systemd[1]: libpod-4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c.scope: Deactivated successfully.
Dec 06 10:06:10 np0005548790.localdomain podman[294055]: 2025-12-06 10:06:10.653457249 +0000 UTC m=+0.165879882 container died 4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_hugle, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container)
Dec 06 10:06:10 np0005548790.localdomain podman[294075]: 2025-12-06 10:06:10.749986238 +0000 UTC m=+0.087044677 container remove 4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_hugle, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7)
Dec 06 10:06:10 np0005548790.localdomain systemd[1]: libpod-conmon-4c159eff8a9fa8f35b188251696f9097b1ade23837c6b58aa90adc142c0cfb2c.scope: Deactivated successfully.
Dec 06 10:06:10 np0005548790.localdomain sudo[294020]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-eacf31cd231ac7fbfb1b84f1ecc25da681864215b933f65693acebadf1cb30dd-merged.mount: Deactivated successfully.
Dec 06 10:06:11 np0005548790.localdomain sudo[294092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:06:11 np0005548790.localdomain sudo[294092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:11 np0005548790.localdomain sudo[294092]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:06:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548786 (monmap changed)...
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548786 on np0005548786.localdomain
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:13 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/2329222999' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:15 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:06:15 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:06:15 np0005548790.localdomain ceph-mon[288373]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:16 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511600 mon_map magic: 0 from mon.2 v2:172.18.0.108:3300/0
Dec 06 10:06:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@2(peon) e10  my rank is now 1 (was 2)
Dec 06 10:06:16 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 06 10:06:16 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 06 10:06:16 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Dec 06 10:06:16 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:06:16 np0005548790.localdomain ceph-mon[288373]: paxos.1).electionLogic(44) init, last seen epoch 44
Dec 06 10:06:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:06:16 np0005548790.localdomain podman[294110]: 2025-12-06 10:06:16.567521641 +0000 UTC m=+0.082594038 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:06:16 np0005548790.localdomain podman[294110]: 2025-12-06 10:06:16.578072302 +0000 UTC m=+0.093144679 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:06:16 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: from='client.26906 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548786"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: Remove daemons mon.np0005548786
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: Safe to remove mon.np0005548786: new quorum should be ['np0005548787', 'np0005548790', 'np0005548789', 'np0005548788'] (from ['np0005548787', 'np0005548790', 'np0005548789', 'np0005548788'])
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: Removing monitor np0005548786 from monmap...
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: Removing daemon mon.np0005548786 from np0005548786.localdomain -- ports []
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548789,np0005548788 in quorum (ranks 0,1,2,3)
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: monmap epoch 10
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:06:16.211793+0000
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005548789
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: osdmap e89: 6 total, 6 up, 6 in
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: mgrmap e23: np0005548787.umwsra(active, since 26s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:06:18 np0005548790.localdomain ceph-mon[288373]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:06:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:06:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:06:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:06:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:06:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:06:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18694 "" "Go-http-client/1.1"
Dec 06 10:06:19 np0005548790.localdomain sudo[294128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:06:19 np0005548790.localdomain sudo[294128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294128]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:06:19 np0005548790.localdomain sudo[294146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294146]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain ceph-mon[288373]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:06:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:19 np0005548790.localdomain sudo[294164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:19 np0005548790.localdomain sudo[294164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294164]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:19 np0005548790.localdomain sudo[294182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294182]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:19 np0005548790.localdomain sudo[294200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294200]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:19 np0005548790.localdomain sudo[294234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294234]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:19 np0005548790.localdomain sudo[294252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294252]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:06:19 np0005548790.localdomain sudo[294270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294270]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:19 np0005548790.localdomain sudo[294288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:06:19 np0005548790.localdomain sudo[294288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:19 np0005548790.localdomain sudo[294288]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain sudo[294306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:06:20 np0005548790.localdomain sudo[294306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294306]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain sudo[294324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:20 np0005548790.localdomain sudo[294324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294324]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain sudo[294342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:20 np0005548790.localdomain sudo[294342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294342]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain sudo[294360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:20 np0005548790.localdomain sudo[294360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294360]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: from='client.44175 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548786.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: Removed label mon from host np0005548786.localdomain
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:20 np0005548790.localdomain sudo[294394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:20 np0005548790.localdomain sudo[294394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294394]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain sudo[294412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:20 np0005548790.localdomain sudo[294412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294412]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:20 np0005548790.localdomain sudo[294430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:20 np0005548790.localdomain sudo[294430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:20 np0005548790.localdomain sudo[294430]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.041157) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581041231, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1244, "num_deletes": 256, "total_data_size": 2170643, "memory_usage": 2212368, "flush_reason": "Manual Compaction"}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581050329, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1158430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14350, "largest_seqno": 15589, "table_properties": {"data_size": 1152840, "index_size": 2869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13681, "raw_average_key_size": 20, "raw_value_size": 1140793, "raw_average_value_size": 1731, "num_data_blocks": 120, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015559, "oldest_key_time": 1765015559, "file_creation_time": 1765015581, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9207 microseconds, and 4085 cpu microseconds.
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.050377) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1158430 bytes OK
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.050398) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.052039) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.052058) EVENT_LOG_v1 {"time_micros": 1765015581052053, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.052080) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2164175, prev total WAL file size 2164175, number of live WAL files 2.
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.052878) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303330' seq:72057594037927935, type:22 .. '6B760031323837' seq:0, type:0; will stop at (end)
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1131KB)], [18(14MB)]
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581052962, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 16555157, "oldest_snapshot_seqno": -1}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10183 keys, 15590592 bytes, temperature: kUnknown
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581168474, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 15590592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15531833, "index_size": 32226, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 272539, "raw_average_key_size": 26, "raw_value_size": 15356962, "raw_average_value_size": 1508, "num_data_blocks": 1220, "num_entries": 10183, "num_filter_entries": 10183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015581, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.168743) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 15590592 bytes
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.170758) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 134.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 14.7 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(27.7) write-amplify(13.5) OK, records in: 10717, records dropped: 534 output_compression: NoCompression
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.170812) EVENT_LOG_v1 {"time_micros": 1765015581170799, "job": 8, "event": "compaction_finished", "compaction_time_micros": 115576, "compaction_time_cpu_micros": 43980, "output_level": 6, "num_output_files": 1, "total_output_size": 15590592, "num_input_records": 10717, "num_output_records": 10183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581171156, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015581173581, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.052706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.173686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.173693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.173695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.173697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:21.173699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: Updating np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='client.34289 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548786.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: Removed label mgr from host np0005548786.localdomain
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:21 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:22 np0005548790.localdomain ceph-mon[288373]: Removing daemon mgr.np0005548786.mczynb from np0005548786.localdomain -- ports [8765]
Dec 06 10:06:22 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:06:22 np0005548790.localdomain podman[294448]: 2025-12-06 10:06:22.576222631 +0000 UTC m=+0.088348521 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: tmp-crun.tA0mnb.mount: Deactivated successfully.
Dec 06 10:06:22 np0005548790.localdomain podman[294458]: 2025-12-06 10:06:22.619717398 +0000 UTC m=+0.089176373 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:06:22 np0005548790.localdomain podman[294458]: 2025-12-06 10:06:22.633969667 +0000 UTC m=+0.103428652 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:06:22 np0005548790.localdomain podman[294459]: 2025-12-06 10:06:22.725996606 +0000 UTC m=+0.191582498 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:06:22 np0005548790.localdomain podman[294459]: 2025-12-06 10:06:22.739156675 +0000 UTC m=+0.204742597 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, managed_by=edpm_ansible, maintainer=Red Hat, Inc., distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:06:22 np0005548790.localdomain podman[294448]: 2025-12-06 10:06:22.793397099 +0000 UTC m=+0.305522949 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:06:22 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:06:23 np0005548790.localdomain sudo[294506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:06:23 np0005548790.localdomain sudo[294506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:23 np0005548790.localdomain sudo[294506]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: from='client.44183 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548786.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: Removed label _admin from host np0005548786.localdomain
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth rm", "entity": "mgr.np0005548786.mczynb"} : dispatch
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548786.mczynb"}]': finished
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:06:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:06:24 np0005548790.localdomain ceph-mon[288373]: Removing key for mgr.np0005548786.mczynb
Dec 06 10:06:25 np0005548790.localdomain sudo[294524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:06:25 np0005548790.localdomain sudo[294524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:25 np0005548790.localdomain sudo[294524]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548786.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:06:25 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/544071182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:26.335 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:26.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:06:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:26.336 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:06:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:26.352 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: Removing np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: Removing np0005548786.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: Removing np0005548786.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548786 (monmap changed)...
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548786 on np0005548786.localdomain
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/544071182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:26 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/1605363813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:27 np0005548790.localdomain podman[294542]: 2025-12-06 10:06:27.588755044 +0000 UTC m=+0.103411172 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 06 10:06:27 np0005548790.localdomain podman[294542]: 2025-12-06 10:06:27.604167415 +0000 UTC m=+0.118823573 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:06:27 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:06:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:28.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:28.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:28.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:06:28 np0005548790.localdomain ceph-mon[288373]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.346 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.346 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.347 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.365 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.366 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.366 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.366 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.367 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:06:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:06:29 np0005548790.localdomain podman[294563]: 2025-12-06 10:06:29.564327622 +0000 UTC m=+0.080036619 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:06:29 np0005548790.localdomain podman[294563]: 2025-12-06 10:06:29.576118057 +0000 UTC m=+0.091827054 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:06:29 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:06:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:29.843 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:29 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/308530152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.057 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.060 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12041MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.060 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.061 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.145 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.145 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.177 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.611 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.617 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.647 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.650 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:06:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:30.651 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/3072124168' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:06:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:31.638 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:31.639 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:06:31.639 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:06:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/406415283' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:06:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:33 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:06:33 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:06:33 np0005548790.localdomain ceph-mon[288373]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:33 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/3079514338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:06:34 np0005548790.localdomain systemd[1]: tmp-crun.ZMIQtQ.mount: Deactivated successfully.
Dec 06 10:06:34 np0005548790.localdomain podman[294628]: 2025-12-06 10:06:34.562946712 +0000 UTC m=+0.078438666 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:06:34 np0005548790.localdomain podman[294628]: 2025-12-06 10:06:34.605239421 +0000 UTC m=+0.120731565 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:06:34 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: from='client.34370 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005548786.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: Added label _no_schedule to host np0005548786.localdomain
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548786.localdomain
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: from='client.44219 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005548786.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain"} : dispatch
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain"}]': finished
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:06:37 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: from='client.26868 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005548786.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: Removed host np0005548786.localdomain
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:06:38 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/1005735917' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:06:39 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/1005735917' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:06:40 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:40 np0005548790.localdomain sudo[294653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:40 np0005548790.localdomain sudo[294653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:40 np0005548790.localdomain sudo[294653]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:41 np0005548790.localdomain sudo[294671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.048930) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601048982, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1049, "num_deletes": 258, "total_data_size": 1528838, "memory_usage": 1555016, "flush_reason": "Manual Compaction"}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 06 10:06:41 np0005548790.localdomain sudo[294671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601059100, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 889561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15594, "largest_seqno": 16638, "table_properties": {"data_size": 884640, "index_size": 2328, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12862, "raw_average_key_size": 21, "raw_value_size": 873892, "raw_average_value_size": 1442, "num_data_blocks": 100, "num_entries": 606, "num_filter_entries": 606, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015581, "oldest_key_time": 1765015581, "file_creation_time": 1765015601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 10220 microseconds, and 4442 cpu microseconds.
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.059151) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 889561 bytes OK
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.059178) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.061984) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.062008) EVENT_LOG_v1 {"time_micros": 1765015601062001, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.062032) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1523229, prev total WAL file size 1523553, number of live WAL files 2.
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.063009) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353135' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end)
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(868KB)], [21(14MB)]
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601063060, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16480153, "oldest_snapshot_seqno": -1}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10245 keys, 16342241 bytes, temperature: kUnknown
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601196376, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16342241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16281656, "index_size": 33860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 275387, "raw_average_key_size": 26, "raw_value_size": 16104373, "raw_average_value_size": 1571, "num_data_blocks": 1290, "num_entries": 10245, "num_filter_entries": 10245, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.197144) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16342241 bytes
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.198845) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.3 rd, 122.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.9 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(36.9) write-amplify(18.4) OK, records in: 10789, records dropped: 544 output_compression: NoCompression
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.198881) EVENT_LOG_v1 {"time_micros": 1765015601198867, "job": 10, "event": "compaction_finished", "compaction_time_micros": 133693, "compaction_time_cpu_micros": 41376, "output_level": 6, "num_output_files": 1, "total_output_size": 16342241, "num_input_records": 10789, "num_output_records": 10245, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601199385, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015601201806, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.062911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.202067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.202077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.202082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.202087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:06:41.202091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 2025-12-06 10:06:41.494642725 +0000 UTC m=+0.074814088 container create 19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_pascal, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=)
Dec 06 10:06:41 np0005548790.localdomain systemd[1]: Started libpod-conmon-19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711.scope.
Dec 06 10:06:41 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 2025-12-06 10:06:41.558007868 +0000 UTC m=+0.138179251 container init 19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_pascal, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_CLEAN=True, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 2025-12-06 10:06:41.46482703 +0000 UTC m=+0.044998413 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 2025-12-06 10:06:41.570241554 +0000 UTC m=+0.150412917 container start 19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_pascal, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph)
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 2025-12-06 10:06:41.570515782 +0000 UTC m=+0.150687195 container attach 19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_pascal, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Dec 06 10:06:41 np0005548790.localdomain vibrant_pascal[294722]: 167 167
Dec 06 10:06:41 np0005548790.localdomain systemd[1]: libpod-19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711.scope: Deactivated successfully.
Dec 06 10:06:41 np0005548790.localdomain podman[294707]: 2025-12-06 10:06:41.574631802 +0000 UTC m=+0.154803175 container died 19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_pascal, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Dec 06 10:06:41 np0005548790.localdomain podman[294728]: 2025-12-06 10:06:41.647916029 +0000 UTC m=+0.066382294 container remove 19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_pascal, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:06:41 np0005548790.localdomain systemd[1]: libpod-conmon-19e41488cc546c00dd98eccde0974f6db4ed988cbaa65ff62c8020b929d53711.scope: Deactivated successfully.
Dec 06 10:06:41 np0005548790.localdomain sudo[294671]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:41 np0005548790.localdomain sudo[294743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:41 np0005548790.localdomain sudo[294743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:41 np0005548790.localdomain sudo[294743]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:41 np0005548790.localdomain sudo[294761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:41 np0005548790.localdomain sudo[294761]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:42 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:06:42 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:06:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:06:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 2025-12-06 10:06:42.314165181 +0000 UTC m=+0.079097983 container create 7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_leavitt, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7)
Dec 06 10:06:42 np0005548790.localdomain systemd[1]: Started libpod-conmon-7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a.scope.
Dec 06 10:06:42 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 2025-12-06 10:06:42.375303234 +0000 UTC m=+0.140236056 container init 7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_leavitt, distribution-scope=public, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-11-26T19:44:28Z, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:06:42 np0005548790.localdomain stupefied_leavitt[294811]: 167 167
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 2025-12-06 10:06:42.285741852 +0000 UTC m=+0.050674704 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 2025-12-06 10:06:42.385248839 +0000 UTC m=+0.150181661 container start 7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_leavitt, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:06:42 np0005548790.localdomain systemd[1]: libpod-7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a.scope: Deactivated successfully.
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 2025-12-06 10:06:42.386218186 +0000 UTC m=+0.151151018 container attach 7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_leavitt, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, version=7, name=rhceph, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 10:06:42 np0005548790.localdomain podman[294796]: 2025-12-06 10:06:42.389428501 +0000 UTC m=+0.154361363 container died 7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_leavitt, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-type=git, release=1763362218, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:06:42 np0005548790.localdomain podman[294816]: 2025-12-06 10:06:42.479056475 +0000 UTC m=+0.083043709 container remove 7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_leavitt, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:06:42 np0005548790.localdomain systemd[1]: libpod-conmon-7d7504f5661652f9244296e7770226170ff733329000adf7a790079d1439731a.scope: Deactivated successfully.
Dec 06 10:06:42 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2d865d62a8d87339b920774d9ca3b4ff0b21885e3f74338c66523eddd6fd9488-merged.mount: Deactivated successfully.
Dec 06 10:06:42 np0005548790.localdomain sudo[294761]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:42 np0005548790.localdomain sudo[294840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:42 np0005548790.localdomain sudo[294840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:42 np0005548790.localdomain sudo[294840]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:42 np0005548790.localdomain sudo[294858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:42 np0005548790.localdomain sudo[294858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:06:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 2025-12-06 10:06:43.291389939 +0000 UTC m=+0.077551643 container create 07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_haibt, GIT_BRANCH=main, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:06:43 np0005548790.localdomain systemd[1]: Started libpod-conmon-07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88.scope.
Dec 06 10:06:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 2025-12-06 10:06:43.260324679 +0000 UTC m=+0.046486423 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 2025-12-06 10:06:43.365302322 +0000 UTC m=+0.151464026 container init 07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_haibt, architecture=x86_64, version=7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 2025-12-06 10:06:43.373935173 +0000 UTC m=+0.160096877 container start 07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_haibt, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, release=1763362218, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 2025-12-06 10:06:43.374164339 +0000 UTC m=+0.160326043 container attach 07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_haibt, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:06:43 np0005548790.localdomain elastic_haibt[294907]: 167 167
Dec 06 10:06:43 np0005548790.localdomain systemd[1]: libpod-07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88.scope: Deactivated successfully.
Dec 06 10:06:43 np0005548790.localdomain podman[294892]: 2025-12-06 10:06:43.377178179 +0000 UTC m=+0.163339913 container died 07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_haibt, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.41.4, version=7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:06:43 np0005548790.localdomain podman[294912]: 2025-12-06 10:06:43.473573894 +0000 UTC m=+0.086184833 container remove 07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_haibt, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:06:43 np0005548790.localdomain systemd[1]: libpod-conmon-07dd595bf2c3052e54b08f0650903f9722769a9bea23ec9048c5db29b2231d88.scope: Deactivated successfully.
Dec 06 10:06:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-20900c1d9e0063fb00489ed4e0d4f9f0c02c6d994530c101e354c76bd825c5ae-merged.mount: Deactivated successfully.
Dec 06 10:06:43 np0005548790.localdomain sudo[294858]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:43 np0005548790.localdomain sudo[294936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:43 np0005548790.localdomain sudo[294936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:43 np0005548790.localdomain sudo[294936]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:43 np0005548790.localdomain sudo[294954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:43 np0005548790.localdomain sudo[294954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:44 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:06:44 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:06:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:06:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 2025-12-06 10:06:44.323732117 +0000 UTC m=+0.075081655 container create 8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_pascal, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, ceph=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:06:44 np0005548790.localdomain systemd[1]: Started libpod-conmon-8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962.scope.
Dec 06 10:06:44 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 2025-12-06 10:06:44.294463206 +0000 UTC m=+0.045812754 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 2025-12-06 10:06:44.76008416 +0000 UTC m=+0.511433688 container init 8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_pascal, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.41.4)
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 2025-12-06 10:06:44.773283073 +0000 UTC m=+0.524632611 container start 8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_pascal, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=1763362218, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main)
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 2025-12-06 10:06:44.773564601 +0000 UTC m=+0.524914179 container attach 8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_pascal, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:06:44 np0005548790.localdomain cranky_pascal[295003]: 167 167
Dec 06 10:06:44 np0005548790.localdomain systemd[1]: libpod-8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962.scope: Deactivated successfully.
Dec 06 10:06:44 np0005548790.localdomain podman[294988]: 2025-12-06 10:06:44.776504899 +0000 UTC m=+0.527854467 container died 8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_pascal, com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=1763362218, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 06 10:06:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8e0c352951fa0ab64f05856b696e59171f0049b72cd7a8c3d1a66249b373709a-merged.mount: Deactivated successfully.
Dec 06 10:06:44 np0005548790.localdomain podman[295008]: 2025-12-06 10:06:44.87842227 +0000 UTC m=+0.088840382 container remove 8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_pascal, distribution-scope=public, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph)
Dec 06 10:06:44 np0005548790.localdomain systemd[1]: libpod-conmon-8ed05c11ebbcfe19c10917de9fa088120f0fa5351debf68af987eea537027962.scope: Deactivated successfully.
Dec 06 10:06:44 np0005548790.localdomain sudo[294954]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:45 np0005548790.localdomain sudo[295026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:06:45 np0005548790.localdomain sudo[295026]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:45 np0005548790.localdomain sudo[295026]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:45 np0005548790.localdomain sudo[295044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:45 np0005548790.localdomain sudo[295044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 2025-12-06 10:06:45.544606281 +0000 UTC m=+0.076552865 container create a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_tharp, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64)
Dec 06 10:06:45 np0005548790.localdomain systemd[1]: Started libpod-conmon-a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543.scope.
Dec 06 10:06:45 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 2025-12-06 10:06:45.602261672 +0000 UTC m=+0.134208256 container init a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_tharp, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=1763362218)
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 2025-12-06 10:06:45.611111418 +0000 UTC m=+0.143058002 container start a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_tharp, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, version=7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 2025-12-06 10:06:45.611309803 +0000 UTC m=+0.143256397 container attach a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_tharp, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, architecture=x86_64, ceph=True, vendor=Red Hat, Inc., release=1763362218, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 2025-12-06 10:06:45.512890405 +0000 UTC m=+0.044837019 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:06:45 np0005548790.localdomain trusting_tharp[295095]: 167 167
Dec 06 10:06:45 np0005548790.localdomain systemd[1]: libpod-a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543.scope: Deactivated successfully.
Dec 06 10:06:45 np0005548790.localdomain podman[295080]: 2025-12-06 10:06:45.616559203 +0000 UTC m=+0.148505817 container died a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_tharp, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:06:45 np0005548790.localdomain podman[295100]: 2025-12-06 10:06:45.709917326 +0000 UTC m=+0.082496274 container remove a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_tharp, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:06:45 np0005548790.localdomain systemd[1]: libpod-conmon-a96c54e6bf74bfb26954fa58aecd3a3322569bfe47d2584433e74b76353aa543.scope: Deactivated successfully.
Dec 06 10:06:45 np0005548790.localdomain sudo[295044]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:46 np0005548790.localdomain sudo[295116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:06:46 np0005548790.localdomain sudo[295116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:46 np0005548790.localdomain sudo[295116]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='client.44243 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: Saving service mon spec with placement label:mon
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:06:46 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: from='client.44251 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548789", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:06:47 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: paxos.1).electionLogic(46) init, last seen epoch 46
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:47 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:47 np0005548790.localdomain podman[295134]: 2025-12-06 10:06:47.610986065 +0000 UTC m=+0.125445961 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:06:47 np0005548790.localdomain podman[295134]: 2025-12-06 10:06:47.618598549 +0000 UTC m=+0.133058455 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:06:47 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:06:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:06:48.389 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:06:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:06:48.390 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:06:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:06:48.390 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:06:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:06:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:06:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:06:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:06:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:06:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18693 "" "Go-http-client/1.1"
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: paxos.1).electionLogic(49) init, last seen epoch 49, mid-election, bumping
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e11 handle_timecheck drop unexpected msg
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:06:52 np0005548790.localdomain sudo[295152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:06:52 np0005548790.localdomain sudo[295152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:06:52 np0005548790.localdomain sudo[295152]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:06:52 np0005548790.localdomain sudo[295177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:06:52 np0005548790.localdomain sudo[295177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:06:52 np0005548790.localdomain sudo[295177]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:52 np0005548790.localdomain podman[295171]: 2025-12-06 10:06:52.870664166 +0000 UTC m=+0.081535178 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec 06 10:06:52 np0005548790.localdomain podman[295171]: 2025-12-06 10:06:52.888561625 +0000 UTC m=+0.099432607 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Dec 06 10:06:52 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:06:52 np0005548790.localdomain sudo[295222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:52 np0005548790.localdomain sudo[295222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:52 np0005548790.localdomain sudo[295222]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:52 np0005548790.localdomain podman[295211]: 2025-12-06 10:06:52.97223865 +0000 UTC m=+0.095274056 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:06:53 np0005548790.localdomain podman[295211]: 2025-12-06 10:06:53.009117824 +0000 UTC m=+0.132153200 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:06:53 np0005548790.localdomain sudo[295245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:53 np0005548790.localdomain sudo[295245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295245]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain systemd[1]: tmp-crun.82S1xH.mount: Deactivated successfully.
Dec 06 10:06:53 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:06:53 np0005548790.localdomain podman[295169]: 2025-12-06 10:06:53.031992245 +0000 UTC m=+0.247107060 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 10:06:53 np0005548790.localdomain podman[295169]: 2025-12-06 10:06:53.071221613 +0000 UTC m=+0.286336398 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:06:53 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:06:53 np0005548790.localdomain sudo[295283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:53 np0005548790.localdomain sudo[295283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295283]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:53 np0005548790.localdomain sudo[295317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295317]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:06:53 np0005548790.localdomain sudo[295335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295335]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain sudo[295353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295353]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:06:53 np0005548790.localdomain sudo[295371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295371]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:06:53 np0005548790.localdomain sudo[295389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295389]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: monmap epoch 11
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:06:47.518948+0000
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: osdmap e89: 6 total, 6 up, 6 in
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mgrmap e23: np0005548787.umwsra(active, since 61s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1/3 mons down, quorum np0005548787,np0005548790 (MON_DOWN)
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548788 in quorum (ranks 0,1,2)
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: monmap epoch 11
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:06:47.518948+0000
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: osdmap e89: 6 total, 6 up, 6 in
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: mgrmap e23: np0005548787.umwsra(active, since 61s), standbys: np0005548786.mczynb, np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548787,np0005548790)
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:06:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:06:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:06:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:06:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:06:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:06:53 np0005548790.localdomain sudo[295407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:53 np0005548790.localdomain sudo[295407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295407]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:06:53 np0005548790.localdomain sudo[295425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295425]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:53 np0005548790.localdomain sudo[295443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295443]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:53 np0005548790.localdomain sudo[295477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:53 np0005548790.localdomain sudo[295477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:53 np0005548790.localdomain sudo[295477]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:54 np0005548790.localdomain sudo[295495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:06:54 np0005548790.localdomain sudo[295495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:54 np0005548790.localdomain sudo[295495]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:54 np0005548790.localdomain sudo[295513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:06:54 np0005548790.localdomain sudo[295513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:54 np0005548790.localdomain sudo[295513]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:54 np0005548790.localdomain sudo[295531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:06:54 np0005548790.localdomain sudo[295531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:06:54 np0005548790.localdomain sudo[295531]: pam_unix(sudo:session): session closed for user root
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:54 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:06:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:58 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:06:58 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:06:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:06:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:06:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:06:58 np0005548790.localdomain podman[295549]: 2025-12-06 10:06:58.568590771 +0000 UTC m=+0.085625647 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:06:58 np0005548790.localdomain podman[295549]: 2025-12-06 10:06:58.584225999 +0000 UTC m=+0.101260895 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:06:58 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:06:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:07:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:07:00 np0005548790.localdomain podman[295569]: 2025-12-06 10:07:00.548547467 +0000 UTC m=+0.069685632 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:07:00 np0005548790.localdomain podman[295569]: 2025-12-06 10:07:00.586279595 +0000 UTC m=+0.107417700 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:07:00 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='client.44258 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005548789.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:02 np0005548790.localdomain ceph-mon[288373]: Deploying daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:07:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:07:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:07:02 np0005548790.localdomain ceph-mon[288373]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.160110) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623160145, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1021, "num_deletes": 251, "total_data_size": 1530711, "memory_usage": 1553616, "flush_reason": "Manual Compaction"}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623168912, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 881085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16643, "largest_seqno": 17659, "table_properties": {"data_size": 876229, "index_size": 2263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13172, "raw_average_key_size": 22, "raw_value_size": 865662, "raw_average_value_size": 1467, "num_data_blocks": 94, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015601, "oldest_key_time": 1765015601, "file_creation_time": 1765015623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 8848 microseconds, and 3674 cpu microseconds.
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.168957) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 881085 bytes OK
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.168979) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.171104) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.171126) EVENT_LOG_v1 {"time_micros": 1765015623171120, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.171146) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1525233, prev total WAL file size 1525233, number of live WAL files 2.
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.171979) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(860KB)], [24(15MB)]
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623172113, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 17223326, "oldest_snapshot_seqno": -1}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10303 keys, 13423283 bytes, temperature: kUnknown
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623278042, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 13423283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13364063, "index_size": 32367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 277835, "raw_average_key_size": 26, "raw_value_size": 13187458, "raw_average_value_size": 1279, "num_data_blocks": 1224, "num_entries": 10303, "num_filter_entries": 10303, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015623, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.278451) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13423283 bytes
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.280663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.4 rd, 126.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 15.6 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(34.8) write-amplify(15.2) OK, records in: 10835, records dropped: 532 output_compression: NoCompression
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.280702) EVENT_LOG_v1 {"time_micros": 1765015623280684, "job": 12, "event": "compaction_finished", "compaction_time_micros": 106050, "compaction_time_cpu_micros": 55109, "output_level": 6, "num_output_files": 1, "total_output_size": 13423283, "num_input_records": 10835, "num_output_records": 10303, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623281055, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015623284239, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.171830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.284399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.284408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.284412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.284415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:07:03.284417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:07:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:07:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:07:05 np0005548790.localdomain podman[295594]: 2025-12-06 10:07:05.563279647 +0000 UTC m=+0.079606197 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:07:05 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:05 np0005548790.localdomain podman[295594]: 2025-12-06 10:07:05.670289204 +0000 UTC m=+0.186615764 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 10:07:05 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:07 np0005548790.localdomain sudo[295619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:07 np0005548790.localdomain sudo[295619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:07 np0005548790.localdomain sudo[295619]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.323 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:07:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:07:07 np0005548790.localdomain sudo[295637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:07:07 np0005548790.localdomain sudo[295637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 2025-12-06 10:07:07.809619526 +0000 UTC m=+0.070453622 container create dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_dijkstra, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:07:07 np0005548790.localdomain systemd[1]: Started libpod-conmon-dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6.scope.
Dec 06 10:07:07 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 2025-12-06 10:07:07.881483126 +0000 UTC m=+0.142317242 container init dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_dijkstra, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 2025-12-06 10:07:07.784492355 +0000 UTC m=+0.045326501 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 2025-12-06 10:07:07.892122939 +0000 UTC m=+0.152957035 container start dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_dijkstra, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph)
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 2025-12-06 10:07:07.892398227 +0000 UTC m=+0.153232393 container attach dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_dijkstra, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public)
Dec 06 10:07:07 np0005548790.localdomain upbeat_dijkstra[295686]: 167 167
Dec 06 10:07:07 np0005548790.localdomain systemd[1]: libpod-dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6.scope: Deactivated successfully.
Dec 06 10:07:07 np0005548790.localdomain podman[295671]: 2025-12-06 10:07:07.896528297 +0000 UTC m=+0.157362443 container died dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_dijkstra, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:07:07 np0005548790.localdomain podman[295691]: 2025-12-06 10:07:07.991539775 +0000 UTC m=+0.086386839 container remove dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=upbeat_dijkstra, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:07:07 np0005548790.localdomain systemd[1]: libpod-conmon-dad37589e49c0abea5d3557552a03f675cb28555911a124eb6100c4efa24eab6.scope: Deactivated successfully.
Dec 06 10:07:08 np0005548790.localdomain sudo[295637]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:08 np0005548790.localdomain sudo[295707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:08 np0005548790.localdomain sudo[295707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:08 np0005548790.localdomain sudo[295707]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:08 np0005548790.localdomain sudo[295725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:07:08 np0005548790.localdomain sudo[295725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:07:08 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 2025-12-06 10:07:08.710637279 +0000 UTC m=+0.086019959 container create 34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_colden, build-date=2025-11-26T19:44:28Z, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container)
Dec 06 10:07:08 np0005548790.localdomain systemd[1]: Started libpod-conmon-34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26.scope.
Dec 06 10:07:08 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 2025-12-06 10:07:08.679430105 +0000 UTC m=+0.054812825 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 2025-12-06 10:07:08.779067556 +0000 UTC m=+0.154450196 container init 34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_colden, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 2025-12-06 10:07:08.788874218 +0000 UTC m=+0.164256848 container start 34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_colden, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 2025-12-06 10:07:08.789115744 +0000 UTC m=+0.164498384 container attach 34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_colden, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True)
Dec 06 10:07:08 np0005548790.localdomain stupefied_colden[295774]: 167 167
Dec 06 10:07:08 np0005548790.localdomain systemd[1]: libpod-34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26.scope: Deactivated successfully.
Dec 06 10:07:08 np0005548790.localdomain podman[295759]: 2025-12-06 10:07:08.791916369 +0000 UTC m=+0.167299049 container died 34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_colden, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git)
Dec 06 10:07:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b025ab0d6377d5eaa9fc6804ed8057709c5e957e21cef996707c06dc5437ed13-merged.mount: Deactivated successfully.
Dec 06 10:07:08 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0b0504eb381a5e514f95f1fc01b4d95c476c81d4a4bb0ae184e66ca43a47c1f0-merged.mount: Deactivated successfully.
Dec 06 10:07:08 np0005548790.localdomain podman[295779]: 2025-12-06 10:07:08.903120288 +0000 UTC m=+0.097431342 container remove 34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_colden, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:07:08 np0005548790.localdomain systemd[1]: libpod-conmon-34e9027f6f24e505400a17f1afdb0d569323a608c889175497dd288996b97f26.scope: Deactivated successfully.
Dec 06 10:07:09 np0005548790.localdomain sudo[295725]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:09 np0005548790.localdomain sudo[295802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:09 np0005548790.localdomain sudo[295802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:09 np0005548790.localdomain sudo[295802]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:09 np0005548790.localdomain sudo[295820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:07:09 np0005548790.localdomain sudo[295820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 2025-12-06 10:07:09.756538979 +0000 UTC m=+0.075394654 container create f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, release=1763362218, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-type=git, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:07:09 np0005548790.localdomain systemd[1]: Started libpod-conmon-f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0.scope.
Dec 06 10:07:09 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 2025-12-06 10:07:09.819214733 +0000 UTC m=+0.138070408 container init f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 2025-12-06 10:07:09.72586504 +0000 UTC m=+0.044720755 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 2025-12-06 10:07:09.829160089 +0000 UTC m=+0.148015754 container start f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 06 10:07:09 np0005548790.localdomain quirky_chatterjee[295871]: 167 167
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 2025-12-06 10:07:09.830084353 +0000 UTC m=+0.148940078 container attach f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Dec 06 10:07:09 np0005548790.localdomain systemd[1]: libpod-f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0.scope: Deactivated successfully.
Dec 06 10:07:09 np0005548790.localdomain podman[295856]: 2025-12-06 10:07:09.834193634 +0000 UTC m=+0.153049339 container died f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, name=rhceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, version=7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 06 10:07:09 np0005548790.localdomain podman[295876]: 2025-12-06 10:07:09.933030583 +0000 UTC m=+0.086337557 container remove f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_chatterjee, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, release=1763362218, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Dec 06 10:07:09 np0005548790.localdomain systemd[1]: libpod-conmon-f3f348d8389c49d3fbf4efd111257db71fd728b3d37e40a57900762a636327f0.scope: Deactivated successfully.
Dec 06 10:07:10 np0005548790.localdomain sudo[295820]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:10 np0005548790.localdomain sudo[295900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:10 np0005548790.localdomain sudo[295900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:10 np0005548790.localdomain sudo[295900]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:10 np0005548790.localdomain sudo[295918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:07:10 np0005548790.localdomain sudo[295918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:07:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 2025-12-06 10:07:10.746169948 +0000 UTC m=+0.082183686 container create 44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_murdock, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, distribution-scope=public, ceph=True, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:07:10 np0005548790.localdomain systemd[1]: Started libpod-conmon-44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c.scope.
Dec 06 10:07:10 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 2025-12-06 10:07:10.71103784 +0000 UTC m=+0.047051618 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:07:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-365428e08eb6aa11dc441b2d7da8891d175c2b448b62147752adab547b6b4c81-merged.mount: Deactivated successfully.
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 2025-12-06 10:07:10.826169985 +0000 UTC m=+0.162183723 container init 44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_murdock, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph)
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 2025-12-06 10:07:10.839354967 +0000 UTC m=+0.175368715 container start 44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_murdock, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, distribution-scope=public, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, CEPH_POINT_RELEASE=)
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 2025-12-06 10:07:10.839874561 +0000 UTC m=+0.175888319 container attach 44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_murdock, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, ceph=True, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:07:10 np0005548790.localdomain optimistic_murdock[295967]: 167 167
Dec 06 10:07:10 np0005548790.localdomain systemd[1]: libpod-44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c.scope: Deactivated successfully.
Dec 06 10:07:10 np0005548790.localdomain podman[295952]: 2025-12-06 10:07:10.842316436 +0000 UTC m=+0.178330234 container died 44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_murdock, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:07:10 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ae1d31f03092013aa9f2ad04b0cefdf6d0b2c6e2199393130dec13faa854b3db-merged.mount: Deactivated successfully.
Dec 06 10:07:10 np0005548790.localdomain podman[295972]: 2025-12-06 10:07:10.937037995 +0000 UTC m=+0.085918115 container remove 44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_murdock, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, version=7, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True)
Dec 06 10:07:10 np0005548790.localdomain systemd[1]: libpod-conmon-44e6135a52e926d9d33427fb63ae55e1254b2aab4e0a75a1bdfa1074c1f6058c.scope: Deactivated successfully.
Dec 06 10:07:11 np0005548790.localdomain sudo[295918]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:11 np0005548790.localdomain sudo[295990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:11 np0005548790.localdomain sudo[295990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:11 np0005548790.localdomain sudo[295990]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:11 np0005548790.localdomain sudo[296008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:07:11 np0005548790.localdomain sudo[296008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 2025-12-06 10:07:11.63820127 +0000 UTC m=+0.072498397 container create c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_lovelace, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-type=git)
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:11 np0005548790.localdomain systemd[1]: Started libpod-conmon-c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31.scope.
Dec 06 10:07:11 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 2025-12-06 10:07:11.703882144 +0000 UTC m=+0.138179261 container init c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_lovelace, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 2025-12-06 10:07:11.608731743 +0000 UTC m=+0.043028940 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 2025-12-06 10:07:11.714151858 +0000 UTC m=+0.148448985 container start c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_lovelace, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, name=rhceph, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 2025-12-06 10:07:11.714409085 +0000 UTC m=+0.148706252 container attach c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_lovelace, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, ceph=True, RELEASE=main, vendor=Red Hat, Inc., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Dec 06 10:07:11 np0005548790.localdomain dreamy_lovelace[296060]: 167 167
Dec 06 10:07:11 np0005548790.localdomain systemd[1]: libpod-c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31.scope: Deactivated successfully.
Dec 06 10:07:11 np0005548790.localdomain podman[296044]: 2025-12-06 10:07:11.717408415 +0000 UTC m=+0.151705602 container died c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_lovelace, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, distribution-scope=public, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, vcs-type=git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:07:11 np0005548790.localdomain podman[296065]: 2025-12-06 10:07:11.806860344 +0000 UTC m=+0.082174545 container remove c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dreamy_lovelace, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1763362218, distribution-scope=public, RELEASE=main, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=)
Dec 06 10:07:11 np0005548790.localdomain systemd[1]: libpod-conmon-c0cf6f20debcf350b55b516191700647f23173653c1aa97e716a70ef8a8a0e31.scope: Deactivated successfully.
Dec 06 10:07:11 np0005548790.localdomain sudo[296008]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:11 np0005548790.localdomain sudo[296082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:07:11 np0005548790.localdomain sudo[296082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:11 np0005548790.localdomain sudo[296082]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:12 np0005548790.localdomain sudo[296100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:07:12 np0005548790.localdomain sudo[296100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:12 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:07:12 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:07:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:12 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:12 np0005548790.localdomain sudo[296100]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:13 np0005548790.localdomain ceph-mon[288373]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:13 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:07:14 np0005548790.localdomain sudo[296150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:07:14 np0005548790.localdomain sudo[296150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:14 np0005548790.localdomain sudo[296150]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:15 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/325418580' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:07:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:15 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:15 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:16 np0005548790.localdomain sudo[296168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:07:16 np0005548790.localdomain sudo[296168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:07:16 np0005548790.localdomain sudo[296168]: pam_unix(sudo:session): session closed for user root
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='client.44270 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: Reconfig service osd.default_drive_group
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:07:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' 
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 e90: 6 total, 6 up, 6 in
Dec 06 10:07:17 np0005548790.localdomain sshd[292400]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:07:17 np0005548790.localdomain systemd-logind[760]: Session 65 logged out. Waiting for processes to exit.
Dec 06 10:07:17 np0005548790.localdomain systemd[1]: session-65.scope: Deactivated successfully.
Dec 06 10:07:17 np0005548790.localdomain systemd[1]: session-65.scope: Consumed 22.278s CPU time.
Dec 06 10:07:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:07:17 np0005548790.localdomain systemd-logind[760]: Removed session 65.
Dec 06 10:07:18 np0005548790.localdomain systemd[1]: tmp-crun.TWVgjq.mount: Deactivated successfully.
Dec 06 10:07:18 np0005548790.localdomain podman[296186]: 2025-12-06 10:07:18.050470522 +0000 UTC m=+0.093646162 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:07:18 np0005548790.localdomain podman[296186]: 2025-12-06 10:07:18.087495171 +0000 UTC m=+0.130670861 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 10:07:18 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:07:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:07:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:07:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:07:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:07:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:07:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18689 "" "Go-http-client/1.1"
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: from='mgr.24103 172.18.0.105:0/2222976075' entity='mgr.np0005548787.umwsra' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/2080000025' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: Activating manager daemon np0005548786.mczynb
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: osdmap e90: 6 total, 6 up, 6 in
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:07:18 np0005548790.localdomain ceph-mon[288373]: mgrmap e24: np0005548786.mczynb(active, starting, since 0.0565768s), standbys: np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq
Dec 06 10:07:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:21 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:21 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:23 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548787.umwsra started
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: tmp-crun.dO4WtF.mount: Deactivated successfully.
Dec 06 10:07:23 np0005548790.localdomain podman[296205]: 2025-12-06 10:07:23.57813295 +0000 UTC m=+0.083319676 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: tmp-crun.gUCeUF.mount: Deactivated successfully.
Dec 06 10:07:23 np0005548790.localdomain podman[296206]: 2025-12-06 10:07:23.588923998 +0000 UTC m=+0.085252378 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:07:23 np0005548790.localdomain podman[296205]: 2025-12-06 10:07:23.593079779 +0000 UTC m=+0.098266435 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:07:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:07:23 np0005548790.localdomain podman[296206]: 2025-12-06 10:07:23.605740938 +0000 UTC m=+0.102069248 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:07:23 np0005548790.localdomain podman[296204]: 2025-12-06 10:07:23.68445302 +0000 UTC m=+0.188279479 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:07:23 np0005548790.localdomain podman[296204]: 2025-12-06 10:07:23.718389286 +0000 UTC m=+0.222215725 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:07:23 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:07:23 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:24 np0005548790.localdomain ceph-mon[288373]: mgrmap e25: np0005548786.mczynb(active, starting, since 5s), standbys: np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:07:25 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/3985914868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:25 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:26 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:27 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/1488514553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:27 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:27 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 1002...
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Activating special unit Exit the Session...
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Removed slice User Background Tasks Slice.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped target Main User Target.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped target Basic System.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped target Paths.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped target Sockets.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped target Timers.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Closed D-Bus User Message Bus Socket.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Stopped Create User's Volatile Files and Directories.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Removed slice User Application Slice.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Reached target Shutdown.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Finished Exit the Session.
Dec 06 10:07:27 np0005548790.localdomain systemd[25986]: Reached target Exit the Session.
Dec 06 10:07:27 np0005548790.localdomain systemd[1]: user@1002.service: Deactivated successfully.
Dec 06 10:07:27 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 1002.
Dec 06 10:07:27 np0005548790.localdomain systemd[1]: user@1002.service: Consumed 12.886s CPU time, read 0B from disk, written 7.0K to disk.
Dec 06 10:07:27 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1002...
Dec 06 10:07:28 np0005548790.localdomain systemd[1]: run-user-1002.mount: Deactivated successfully.
Dec 06 10:07:28 np0005548790.localdomain systemd[1]: user-runtime-dir@1002.service: Deactivated successfully.
Dec 06 10:07:28 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1002.
Dec 06 10:07:28 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 1002.
Dec 06 10:07:28 np0005548790.localdomain systemd[1]: user-1002.slice: Consumed 4min 20.631s CPU time.
Dec 06 10:07:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:28.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:28.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:07:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:28.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:07:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:29.229 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:07:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:29.229 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:29.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:29.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:07:29 np0005548790.localdomain podman[296262]: 2025-12-06 10:07:29.568486345 +0000 UTC m=+0.082781451 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:07:29 np0005548790.localdomain podman[296262]: 2025-12-06 10:07:29.584186785 +0000 UTC m=+0.098481881 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:07:29 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:07:29 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.359 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.359 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:07:31 np0005548790.localdomain podman[296280]: 2025-12-06 10:07:31.549945711 +0000 UTC m=+0.069789415 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:07:31 np0005548790.localdomain podman[296280]: 2025-12-06 10:07:31.58624455 +0000 UTC m=+0.106088284 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:07:31 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:07:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:07:31 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3750572853' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:31.799 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.008 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.010 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12046MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.010 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.011 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.094 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.094 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:07:32 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/3750572853' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.116 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:07:32 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:07:32 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1577274021' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.568 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.576 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.595 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.598 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:07:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:32.598 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:33 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/1577274021' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:33.599 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:07:33.600 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:07:33 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:33 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:34 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/2638595726' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:35 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/3442107100' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:07:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:07:36 np0005548790.localdomain systemd[1]: tmp-crun.MfWosR.mount: Deactivated successfully.
Dec 06 10:07:36 np0005548790.localdomain podman[296343]: 2025-12-06 10:07:36.291550057 +0000 UTC m=+0.064678058 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:07:36 np0005548790.localdomain podman[296343]: 2025-12-06 10:07:36.334304998 +0000 UTC m=+0.107433039 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:07:36 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:07:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:39 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/3222977501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:07:39 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/3222977501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:07:39 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 06 10:07:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:43 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x563548511600 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Dec 06 10:07:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:07:43 np0005548790.localdomain ceph-mon[288373]: paxos.1).electionLogic(52) init, last seen epoch 52
Dec 06 10:07:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:07:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:07:48.390 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:07:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:07:48.391 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:07:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:07:48.391 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:07:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:07:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:07:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:07:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:07:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:07:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:07:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18693 "" "Go-http-client/1.1"
Dec 06 10:07:48 np0005548790.localdomain podman[296369]: 2025-12-06 10:07:48.569189017 +0000 UTC m=+0.080753098 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:07:48 np0005548790.localdomain podman[296369]: 2025-12-06 10:07:48.577388996 +0000 UTC m=+0.088953067 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:07:48 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: paxos.1).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 calling monitor election
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548787 is new leader, mons np0005548787,np0005548790,np0005548788,np0005548789 in quorum (ranks 0,1,2,3)
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: monmap epoch 12
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:07:43.610976+0000
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548787
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: 3: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: osdmap e90: 6 total, 6 up, 6 in
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: mgrmap e25: np0005548786.mczynb(active, starting, since 30s), standbys: np0005548790.kvkfyr, np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:07:48 np0005548790.localdomain ceph-mon[288373]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:07:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:07:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:07:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:07:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:07:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:07:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:07:54 np0005548790.localdomain podman[296389]: 2025-12-06 10:07:54.565768368 +0000 UTC m=+0.072545607 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:07:54 np0005548790.localdomain podman[296389]: 2025-12-06 10:07:54.578268902 +0000 UTC m=+0.085046151 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec 06 10:07:54 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:07:54 np0005548790.localdomain podman[296387]: 2025-12-06 10:07:54.631874334 +0000 UTC m=+0.137173174 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:07:54 np0005548790.localdomain podman[296387]: 2025-12-06 10:07:54.636185989 +0000 UTC m=+0.141484789 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:07:54 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:07:54 np0005548790.localdomain podman[296388]: 2025-12-06 10:07:54.689891573 +0000 UTC m=+0.194957527 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:07:54 np0005548790.localdomain podman[296388]: 2025-12-06 10:07:54.700541638 +0000 UTC m=+0.205607562 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:07:54 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:07:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:08:00 np0005548790.localdomain podman[296448]: 2025-12-06 10:08:00.570693542 +0000 UTC m=+0.085175135 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 10:08:00 np0005548790.localdomain podman[296448]: 2025-12-06 10:08:00.610260198 +0000 UTC m=+0.124741771 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:08:00 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:08:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:08:02 np0005548790.localdomain podman[296467]: 2025-12-06 10:08:02.563514471 +0000 UTC m=+0.078517468 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:08:02 np0005548790.localdomain podman[296467]: 2025-12-06 10:08:02.576311333 +0000 UTC m=+0.091314290 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:08:02 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e91 e91: 6 total, 6 up, 6 in
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr handle_mgr_map Activating!
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr handle_mgr_map I am now activating
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548787"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548788"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548789"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548790"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).mds e16 all = 0
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).mds e16 all = 0
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).mds e16 all = 0
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).mds e16 all = 1
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: balancer
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:08:06
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING root] removing stray HostCache host record np0005548786.localdomain.devices.0
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005548786.localdomain.devices.0
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: cephadm
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: crash
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: devicehealth
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [devicehealth INFO root] Starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: iostat
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: nfs
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: orchestrator
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: pg_autoscaler
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: progress
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Loading...
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:08:06 np0005548790.localdomain podman[296508]: 2025-12-06 10:08:06.584445152 +0000 UTC m=+0.094900295 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7efd0c2791c0>, <progress.module.GhostEvent object at 0x7efd0c279160>, <progress.module.GhostEvent object at 0x7efd0c279640>, <progress.module.GhostEvent object at 0x7efd0c279670>, <progress.module.GhostEvent object at 0x7efd0c2796a0>, <progress.module.GhostEvent object at 0x7efd0c2796d0>, <progress.module.GhostEvent object at 0x7efd0c279700>, <progress.module.GhostEvent object at 0x7efd0c279730>, <progress.module.GhostEvent object at 0x7efd0c279760>, <progress.module.GhostEvent object at 0x7efd0c279790>, <progress.module.GhostEvent object at 0x7efd0c2797c0>, <progress.module.GhostEvent object at 0x7efd0c2797f0>, <progress.module.GhostEvent object at 0x7efd0c279820>, <progress.module.GhostEvent object at 0x7efd0c279850>, <progress.module.GhostEvent object at 0x7efd0c279880>, <progress.module.GhostEvent object at 0x7efd0c2798b0>, <progress.module.GhostEvent object at 0x7efd0c2798e0>, <progress.module.GhostEvent object at 0x7efd0c279910>, <progress.module.GhostEvent object at 0x7efd0c279940>, <progress.module.GhostEvent object at 0x7efd0c279970>, <progress.module.GhostEvent object at 0x7efd0c2799a0>, <progress.module.GhostEvent object at 0x7efd0c2799d0>, <progress.module.GhostEvent object at 0x7efd0c279a00>, <progress.module.GhostEvent object at 0x7efd0c279a30>, <progress.module.GhostEvent object at 0x7efd0c279a60>, <progress.module.GhostEvent object at 0x7efd0c279a90>, <progress.module.GhostEvent object at 0x7efd0c279ac0>, <progress.module.GhostEvent object at 0x7efd0c279af0>, <progress.module.GhostEvent object at 0x7efd0c279b20>, <progress.module.GhostEvent object at 0x7efd0c279b50>, <progress.module.GhostEvent object at 0x7efd0c279b80>, <progress.module.GhostEvent object at 0x7efd0c279bb0>, <progress.module.GhostEvent object at 0x7efd0c279be0>, <progress.module.GhostEvent object at 0x7efd0c279c10>, <progress.module.GhostEvent object at 
0x7efd0c279c40>, <progress.module.GhostEvent object at 0x7efd0c279c70>, <progress.module.GhostEvent object at 0x7efd0c279ca0>, <progress.module.GhostEvent object at 0x7efd0c279cd0>, <progress.module.GhostEvent object at 0x7efd0c279d00>, <progress.module.GhostEvent object at 0x7efd0c279d30>, <progress.module.GhostEvent object at 0x7efd0c279d60>, <progress.module.GhostEvent object at 0x7efd0c279d90>, <progress.module.GhostEvent object at 0x7efd0c279dc0>, <progress.module.GhostEvent object at 0x7efd0c279df0>, <progress.module.GhostEvent object at 0x7efd0c279e20>, <progress.module.GhostEvent object at 0x7efd0c279e50>, <progress.module.GhostEvent object at 0x7efd0c279e80>, <progress.module.GhostEvent object at 0x7efd0c279eb0>, <progress.module.GhostEvent object at 0x7efd0c279ee0>, <progress.module.GhostEvent object at 0x7efd0c279f10>] historic events
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] recovery thread starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] starting setup
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: rbd_support
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: restful
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [restful WARNING root] server not running: no certificate configured
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: status
Dec 06 10:08:06 np0005548790.localdomain podman[296508]: 2025-12-06 10:08:06.617376231 +0000 UTC m=+0.127831424 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: telemetry
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: volumes
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.655+0000 7efcf5958640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.655+0000 7efcf5958640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.655+0000 7efcf5958640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.655+0000 7efcf5958640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.655+0000 7efcf5958640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] PerfHandler: starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.660+0000 7efcf795c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.660+0000 7efcf795c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.660+0000 7efcf795c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.660+0000 7efcf795c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:06.660+0000 7efcf795c640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TaskHandler: starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} v 0)
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 10:08:06 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] setup complete
Dec 06 10:08:06 np0005548790.localdomain sshd[296654]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:08:06 np0005548790.localdomain sshd[296654]: Accepted publickey for ceph-admin from 192.168.122.108 port 50120 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:08:06 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 1002.
Dec 06 10:08:06 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 06 10:08:06 np0005548790.localdomain systemd-logind[760]: New session 66 of user ceph-admin.
Dec 06 10:08:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:06 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 06 10:08:06 np0005548790.localdomain systemd[1]: Starting User Manager for UID 1002...
Dec 06 10:08:06 np0005548790.localdomain systemd[296658]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Queued start job for default target Main User Target.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Created slice User Application Slice.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Reached target Paths.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Reached target Timers.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Starting D-Bus User Message Bus Socket...
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Starting Create User's Volatile Files and Directories...
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Finished Create User's Volatile Files and Directories.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Listening on D-Bus User Message Bus Socket.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Reached target Sockets.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Reached target Basic System.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Reached target Main User Target.
Dec 06 10:08:07 np0005548790.localdomain systemd[296658]: Startup finished in 160ms.
Dec 06 10:08:07 np0005548790.localdomain systemd[1]: Started User Manager for UID 1002.
Dec 06 10:08:07 np0005548790.localdomain systemd[1]: Started Session 66 of User ceph-admin.
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/1889957737' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: Activating manager daemon np0005548790.kvkfyr
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: osdmap e91: 6 total, 6 up, 6 in
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: mgrmap e26: np0005548790.kvkfyr(active, starting, since 0.0602071s), standbys: np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: Manager daemon np0005548790.kvkfyr is now available
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"}]': finished
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548786.localdomain.devices.0"}]': finished
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 06 10:08:07 np0005548790.localdomain sshd[296654]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:08:07 np0005548790.localdomain sudo[296674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:07 np0005548790.localdomain sudo[296674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:07 np0005548790.localdomain sudo[296674]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:07 np0005548790.localdomain sudo[296692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:08:07 np0005548790.localdomain sudo[296692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.26943 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: mgr.server handle_report got status from non-daemon mon.np0005548789
Dec 06 10:08:07 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:07.485+0000 7efd24b36640 -1 mgr.server handle_report got status from non-daemon mon.np0005548789
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:07] ENGINE Bus STARTING
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:07] ENGINE Bus STARTING
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:07] ENGINE Serving on https://172.18.0.108:7150
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:07] ENGINE Serving on https://172.18.0.108:7150
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:07] ENGINE Client ('172.18.0.108', 58740) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:07] ENGINE Client ('172.18.0.108', 58740) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:07] ENGINE Serving on http://172.18.0.108:8765
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:07] ENGINE Serving on http://172.18.0.108:8765
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:08:07] ENGINE Bus STARTED
Dec 06 10:08:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:08:07] ENGINE Bus STARTED
Dec 06 10:08:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.44342 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:08 np0005548790.localdomain ceph-mon[288373]: removing stray HostCache host record np0005548786.localdomain.devices.0
Dec 06 10:08:08 np0005548790.localdomain ceph-mon[288373]: mgrmap e27: np0005548790.kvkfyr(active, since 1.07281s), standbys: np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:08:08 np0005548790.localdomain podman[296801]: 2025-12-06 10:08:08.230283414 +0000 UTC m=+0.103552276 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, distribution-scope=public, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 06 10:08:08 np0005548790.localdomain podman[296801]: 2025-12-06 10:08:08.332192136 +0000 UTC m=+0.205461008 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:08:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:08 np0005548790.localdomain ceph-mgr[286934]: [devicehealth INFO root] Check health
Dec 06 10:08:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:08 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:08:09 np0005548790.localdomain sudo[296692]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:09 np0005548790.localdomain sudo[296933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:09 np0005548790.localdomain sudo[296933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:09 np0005548790.localdomain sudo[296933]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='client.26943 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:07] ENGINE Bus STARTING
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:07] ENGINE Serving on https://172.18.0.108:7150
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:07] ENGINE Client ('172.18.0.108', 58740) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:07] ENGINE Serving on http://172.18.0.108:8765
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:07] ENGINE Bus STARTED
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='client.44342 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: Cluster is now healthy
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:09 np0005548790.localdomain sudo[296951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:08:09 np0005548790.localdomain sudo[296951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:09 np0005548790.localdomain sudo[296951]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:09 np0005548790.localdomain sudo[297001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:09 np0005548790.localdomain sudo[297001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:09 np0005548790.localdomain sudo[297001]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:10 np0005548790.localdomain sudo[297019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:08:10 np0005548790.localdomain sudo[297019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain sudo[297019]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.54127 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 06 10:08:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 06 10:08:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 10:08:10 np0005548790.localdomain sudo[297057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:08:10 np0005548790.localdomain sudo[297057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:10 np0005548790.localdomain sudo[297057]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:10 np0005548790.localdomain sudo[297075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:08:10 np0005548790.localdomain sudo[297075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:10 np0005548790.localdomain sudo[297075]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:10 np0005548790.localdomain sudo[297093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:10 np0005548790.localdomain sudo[297093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:10 np0005548790.localdomain sudo[297093]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:10 np0005548790.localdomain sudo[297111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:10 np0005548790.localdomain sudo[297111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:10 np0005548790.localdomain sudo[297111]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:10 np0005548790.localdomain sudo[297129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:10 np0005548790.localdomain sudo[297129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:10 np0005548790.localdomain sudo[297129]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:11 np0005548790.localdomain sudo[297163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297163]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:11 np0005548790.localdomain sudo[297181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297181]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: mgrmap e28: np0005548790.kvkfyr(active, since 3s), standbys: np0005548789.mzhmje, np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='client.54127 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Saving service mon spec with placement label:mon
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain sudo[297199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain sudo[297199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297199]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:11 np0005548790.localdomain sudo[297217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:11 np0005548790.localdomain sudo[297217]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297217]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:11 np0005548790.localdomain sudo[297235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297235]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:11 np0005548790.localdomain sudo[297253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297253]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:11 np0005548790.localdomain sudo[297271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297271]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:11 np0005548790.localdomain sudo[297289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297289]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain sudo[297323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:11 np0005548790.localdomain sudo[297323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:11 np0005548790.localdomain sudo[297323]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:11 np0005548790.localdomain sudo[297341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:11 np0005548790.localdomain sudo[297341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297341]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.44354 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548789", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:12 np0005548790.localdomain sudo[297359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:12 np0005548790.localdomain sudo[297359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297359]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain sudo[297377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:08:12 np0005548790.localdomain sudo[297377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297377]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain sudo[297395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:08:12 np0005548790.localdomain sudo[297395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297395]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: from='client.44354 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548789", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:12 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain sudo[297413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:12 np0005548790.localdomain sudo[297413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297413]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain sudo[297431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:12 np0005548790.localdomain sudo[297431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297431]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:12 np0005548790.localdomain sudo[297449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:12 np0005548790.localdomain sudo[297449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297449]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain sudo[297483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:12 np0005548790.localdomain sudo[297483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297483]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain sudo[297501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:12 np0005548790.localdomain sudo[297501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297501]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain sudo[297519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain sudo[297519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297519]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:12 np0005548790.localdomain sudo[297537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:12 np0005548790.localdomain sudo[297537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297537]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:12 np0005548790.localdomain sudo[297555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:12 np0005548790.localdomain sudo[297555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:12 np0005548790.localdomain sudo[297555]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain sudo[297573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:13 np0005548790.localdomain sudo[297573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297573]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain sudo[297591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:13 np0005548790.localdomain sudo[297591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297591]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:13 np0005548790.localdomain sudo[297609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:13 np0005548790.localdomain sudo[297609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297609]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain sudo[297643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:13 np0005548790.localdomain sudo[297643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297643]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain sudo[297661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:13 np0005548790.localdomain sudo[297661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297661]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:08:13 np0005548790.localdomain sudo[297679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:13 np0005548790.localdomain sudo[297679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297679]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 0 B/s wr, 19 op/s
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 506d7cd8-c362-4c1e-b7b0-9f89dbd89a86 (Updating node-proxy deployment (+4 -> 4))
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 506d7cd8-c362-4c1e-b7b0-9f89dbd89a86 (Updating node-proxy deployment (+4 -> 4))
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 506d7cd8-c362-4c1e-b7b0-9f89dbd89a86 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:13 np0005548790.localdomain sudo[297697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:08:13 np0005548790.localdomain sudo[297697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:13 np0005548790.localdomain sudo[297697]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:13 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:08:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/3734042444' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:14 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:08:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:14 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:14 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:08:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 0 B/s wr, 19 op/s
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548787 (monmap changed)...
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548787 on np0005548787.localdomain
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548787.umwsra", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:15 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:08:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:15 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:15 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:08:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548787.umwsra (monmap changed)...
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548787.umwsra on np0005548787.localdomain
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:16 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:08:16 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:16 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:08:16 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:08:16 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:08:16 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:17 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 06 10:08:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:17 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:08:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:17 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:08:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:08:18 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 06 10:08:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:18 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:18 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:08:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:08:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:08:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:08:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:08:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:08:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:08:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18700 "" "Go-http-client/1.1"
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/327302380' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:19 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:08:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:19 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:19 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:08:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:08:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:08:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:08:19 np0005548790.localdomain systemd[1]: tmp-crun.E20dzd.mount: Deactivated successfully.
Dec 06 10:08:19 np0005548790.localdomain podman[297715]: 2025-12-06 10:08:19.608385641 +0000 UTC m=+0.090418566 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:08:19 np0005548790.localdomain podman[297715]: 2025-12-06 10:08:19.615016858 +0000 UTC m=+0.097049853 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:08:19 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: from='mgr.26470 172.18.0.108:0/129842165' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:20 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e92 e92: 6 total, 6 up, 6 in
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: mgr handle_mgr_map I was active but no longer am
Dec 06 10:08:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:20.475+0000 7efd81088640 -1 mgr handle_mgr_map I was active but no longer am
Dec 06 10:08:20 np0005548790.localdomain sshd[296654]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:08:20 np0005548790.localdomain systemd[1]: session-66.scope: Deactivated successfully.
Dec 06 10:08:20 np0005548790.localdomain systemd[1]: session-66.scope: Consumed 6.143s CPU time.
Dec 06 10:08:20 np0005548790.localdomain systemd-logind[760]: Session 66 logged out. Waiting for processes to exit.
Dec 06 10:08:20 np0005548790.localdomain systemd-logind[760]: Removed session 66.
Dec 06 10:08:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: ignoring --setuser ceph since I am not root
Dec 06 10:08:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: ignoring --setgroup ceph since I am not root
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: pidfile_write: ignore empty --pid-file
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'alerts'
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'balancer'
Dec 06 10:08:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:20.695+0000 7f06c6d50140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 10:08:20 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'cephadm'
Dec 06 10:08:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:20.766+0000 7f06c6d50140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 06 10:08:20 np0005548790.localdomain sshd[297758]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:08:20 np0005548790.localdomain sshd[297758]: Accepted publickey for ceph-admin from 192.168.122.107 port 34020 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:08:20 np0005548790.localdomain systemd-logind[760]: New session 68 of user ceph-admin.
Dec 06 10:08:20 np0005548790.localdomain systemd[1]: Started Session 68 of User ceph-admin.
Dec 06 10:08:20 np0005548790.localdomain sshd[297758]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:08:21 np0005548790.localdomain sudo[297762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:21 np0005548790.localdomain sudo[297762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:21 np0005548790.localdomain sudo[297762]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:21 np0005548790.localdomain sudo[297780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:08:21 np0005548790.localdomain sudo[297780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.200:0/2304971504' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: Activating manager daemon np0005548789.mzhmje
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: mgrmap e29: np0005548789.mzhmje(active, starting, since 0.0455985s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548787"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: Manager daemon np0005548789.mzhmje is now available
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/mirror_snapshot_schedule"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/mirror_snapshot_schedule"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/trash_purge_schedule"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548789.mzhmje/trash_purge_schedule"} : dispatch
Dec 06 10:08:21 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'crash'
Dec 06 10:08:21 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 10:08:21 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'dashboard'
Dec 06 10:08:21 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:21.415+0000 7f06c6d50140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 06 10:08:21 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:21 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'devicehealth'
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'diskprediction_local'
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:22.006+0000 7f06c6d50140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]:   from numpy import show_config as show_numpy_config
Dec 06 10:08:22 np0005548790.localdomain systemd[1]: tmp-crun.NPQem3.mount: Deactivated successfully.
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'influx'
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:22.145+0000 7f06c6d50140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain podman[297875]: 2025-12-06 10:08:22.155228415 +0000 UTC m=+0.110772029 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, release=1763362218, io.buildah.version=1.41.4, name=rhceph, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'insights'
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:22.206+0000 7f06c6d50140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain podman[297875]: 2025-12-06 10:08:22.256452508 +0000 UTC m=+0.211996132 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z)
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'iostat'
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'k8sevents'
Dec 06 10:08:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:22.325+0000 7f06c6d50140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 06 10:08:22 np0005548790.localdomain ceph-mon[288373]: mgrmap e30: np0005548789.mzhmje(active, since 1.06844s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:08:22 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:21] ENGINE Bus STARTING
Dec 06 10:08:22 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:21] ENGINE Serving on https://172.18.0.107:7150
Dec 06 10:08:22 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:21] ENGINE Client ('172.18.0.107', 48298) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:08:22 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:21] ENGINE Serving on http://172.18.0.107:8765
Dec 06 10:08:22 np0005548790.localdomain ceph-mon[288373]: [06/Dec/2025:10:08:21] ENGINE Bus STARTED
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'localpool'
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'mds_autoscaler'
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'mirroring'
Dec 06 10:08:22 np0005548790.localdomain sudo[297780]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:22 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'nfs'
Dec 06 10:08:23 np0005548790.localdomain sudo[297993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:23 np0005548790.localdomain sudo[297993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:23 np0005548790.localdomain sudo[297993]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:23 np0005548790.localdomain sudo[298011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:08:23 np0005548790.localdomain sudo[298011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'orchestrator'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.114+0000 7f06c6d50140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'osd_perf_query'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.261+0000 7f06c6d50140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'osd_support'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.326+0000 7f06c6d50140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'pg_autoscaler'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.383+0000 7f06c6d50140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'progress'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.451+0000 7f06c6d50140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: Cluster is now healthy
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'prometheus'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.512+0000 7f06c6d50140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:08:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:08:23 np0005548790.localdomain sudo[298011]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:23 np0005548790.localdomain sudo[298061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:23 np0005548790.localdomain sudo[298061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:23 np0005548790.localdomain sudo[298061]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.826+0000 7f06c6d50140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'rbd_support'
Dec 06 10:08:23 np0005548790.localdomain sudo[298079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:08:23 np0005548790.localdomain sudo[298079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 10:08:23 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'restful'
Dec 06 10:08:23 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:23.910+0000 7f06c6d50140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'rgw'
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'rook'
Dec 06 10:08:24 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:24.249+0000 7f06c6d50140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain sudo[298079]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:24 np0005548790.localdomain sudo[298116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:08:24 np0005548790.localdomain sudo[298116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:24 np0005548790.localdomain sudo[298116]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:24 np0005548790.localdomain sudo[298134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:08:24 np0005548790.localdomain sudo[298134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:24 np0005548790.localdomain sudo[298134]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:24 np0005548790.localdomain sudo[298152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:24 np0005548790.localdomain sudo[298152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:08:24 np0005548790.localdomain sudo[298152]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'selftest'
Dec 06 10:08:24 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:24.700+0000 7f06c6d50140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain sudo[298176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:08:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:08:24 np0005548790.localdomain podman[298170]: 2025-12-06 10:08:24.761863136 +0000 UTC m=+0.104357158 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec 06 10:08:24 np0005548790.localdomain sudo[298176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:24 np0005548790.localdomain sudo[298176]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'snap_schedule'
Dec 06 10:08:24 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:24.778+0000 7f06c6d50140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 06 10:08:24 np0005548790.localdomain podman[298170]: 2025-12-06 10:08:24.802645016 +0000 UTC m=+0.145139058 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, container_name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm)
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'stats'
Dec 06 10:08:24 np0005548790.localdomain sudo[298228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:24 np0005548790.localdomain sudo[298228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:24 np0005548790.localdomain sudo[298228]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:24 np0005548790.localdomain podman[298208]: 2025-12-06 10:08:24.882442666 +0000 UTC m=+0.110079900 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:08:24 np0005548790.localdomain podman[298208]: 2025-12-06 10:08:24.895760262 +0000 UTC m=+0.123397536 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:08:24 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:08:24 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:08:24 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'status'
Dec 06 10:08:24 np0005548790.localdomain podman[298207]: 2025-12-06 10:08:24.851165151 +0000 UTC m=+0.088344590 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: mgrmap e31: np0005548789.mzhmje(active, since 3s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd/host:np0005548787", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:24 np0005548790.localdomain podman[298207]: 2025-12-06 10:08:24.984253926 +0000 UTC m=+0.221433315 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:08:24 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'telegraf'
Dec 06 10:08:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:24.999+0000 7f06c6d50140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain sudo[298284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:25 np0005548790.localdomain sudo[298284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298284]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'telemetry'
Dec 06 10:08:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:25.062+0000 7f06c6d50140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain sudo[298302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:25 np0005548790.localdomain sudo[298302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298302]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain sudo[298320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298320]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'test_orchestrator'
Dec 06 10:08:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:25.203+0000 7f06c6d50140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain sudo[298338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:25 np0005548790.localdomain sudo[298338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298338]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:25 np0005548790.localdomain sudo[298356]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298356]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'volumes'
Dec 06 10:08:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:25.365+0000 7f06c6d50140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain sudo[298374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:25 np0005548790.localdomain sudo[298374]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298374]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:25 np0005548790.localdomain sudo[298392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298392]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:25 np0005548790.localdomain sudo[298410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298410]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Loading python module 'zabbix'
Dec 06 10:08:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:25.567+0000 7f06c6d50140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:08:25.628+0000 7f06c6d50140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x55a2e9e831e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Dec 06 10:08:25 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.107:6810/2974129872
Dec 06 10:08:25 np0005548790.localdomain sudo[298444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:25 np0005548790.localdomain sudo[298444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298444]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:25 np0005548790.localdomain sudo[298462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298462]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain sudo[298480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298480]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:08:25 np0005548790.localdomain sudo[298498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298498]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain sudo[298516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:08:25 np0005548790.localdomain sudo[298516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:25 np0005548790.localdomain sudo[298516]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Standby manager daemon np0005548790.kvkfyr started
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:25 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:26 np0005548790.localdomain sudo[298534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:26 np0005548790.localdomain sudo[298534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298534]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:26 np0005548790.localdomain sudo[298552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298552]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:26 np0005548790.localdomain sudo[298570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298570]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:26 np0005548790.localdomain sudo[298604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298604]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:08:26 np0005548790.localdomain sudo[298622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298622]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:26 np0005548790.localdomain sudo[298640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298640]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:26 np0005548790.localdomain sudo[298658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298658]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:26 np0005548790.localdomain sudo[298676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298676]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:26 np0005548790.localdomain sudo[298694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298694]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:26 np0005548790.localdomain sudo[298712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain sudo[298712]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:26 np0005548790.localdomain sudo[298730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:26 np0005548790.localdomain sudo[298730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:26 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:26 np0005548790.localdomain sudo[298730]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:27 np0005548790.localdomain sudo[298764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:27 np0005548790.localdomain sudo[298764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:27 np0005548790.localdomain sudo[298764]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:27 np0005548790.localdomain sudo[298782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:08:27 np0005548790.localdomain sudo[298782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:27 np0005548790.localdomain sudo[298782]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:27 np0005548790.localdomain sudo[298800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:27 np0005548790.localdomain sudo[298800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:27 np0005548790.localdomain sudo[298800]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: mgrmap e32: np0005548789.mzhmje(active, since 5s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: Updating np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/1224196971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:27 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:27 np0005548790.localdomain sudo[298818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:08:27 np0005548790.localdomain sudo[298818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:27 np0005548790.localdomain sudo[298818]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:28.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:28.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:08:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:28.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:08:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:28.355 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:08:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:28.356 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:28.356 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 0 B/s wr, 20 op/s
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:08:28 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.107:0/3249850813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:29.350 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:29.351 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:29.351 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:29 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:29.910 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:08:30 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:31.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:31.353 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:31.354 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:08:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:08:31 np0005548790.localdomain systemd[1]: tmp-crun.GLQvpE.mount: Deactivated successfully.
Dec 06 10:08:31 np0005548790.localdomain podman[298836]: 2025-12-06 10:08:31.570002811 +0000 UTC m=+0.085749342 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:08:31 np0005548790.localdomain podman[298836]: 2025-12-06 10:08:31.587833557 +0000 UTC m=+0.103580078 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:08:31 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:31 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:32.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:32.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:32 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.351 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.351 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.351 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.352 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.352 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:08:33 np0005548790.localdomain podman[298868]: 2025-12-06 10:08:33.56923895 +0000 UTC m=+0.083369147 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:08:33 np0005548790.localdomain podman[298868]: 2025-12-06 10:08:33.578038655 +0000 UTC m=+0.092168832 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:08:33 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='client.44410 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:08:33 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3333674039' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:33.826 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.026 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.027 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12011MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.028 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.028 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.167 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.167 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.224 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.524 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.526 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.546 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.585 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:08:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:34.600 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/3333674039' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:34 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e12 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2265627899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.056 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.062 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.089 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.092 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.092 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.354 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:08:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:08:35.354 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:08:35 np0005548790.localdomain sudo[298924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:35 np0005548790.localdomain sudo[298924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:35 np0005548790.localdomain sudo[298924]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:35 np0005548790.localdomain sudo[298942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:35 np0005548790.localdomain sudo[298942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='client.44426 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548787", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/109628701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.108:0/2265627899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:35 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:35 np0005548790.localdomain podman[298977]: 
Dec 06 10:08:35 np0005548790.localdomain podman[298977]: 2025-12-06 10:08:35.989484584 +0000 UTC m=+0.077952183 container create 01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_panini, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, release=1763362218, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:36 np0005548790.localdomain systemd[1]: Started libpod-conmon-01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91.scope.
Dec 06 10:08:36 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:08:36 np0005548790.localdomain podman[298977]: 2025-12-06 10:08:35.957566152 +0000 UTC m=+0.046033801 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:08:36 np0005548790.localdomain podman[298977]: 2025-12-06 10:08:36.072905592 +0000 UTC m=+0.161373191 container init 01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_panini, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=)
Dec 06 10:08:36 np0005548790.localdomain podman[298977]: 2025-12-06 10:08:36.084103831 +0000 UTC m=+0.172571440 container start 01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_panini, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, release=1763362218, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:08:36 np0005548790.localdomain podman[298977]: 2025-12-06 10:08:36.084420319 +0000 UTC m=+0.172887918 container attach 01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_panini, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 06 10:08:36 np0005548790.localdomain gallant_panini[298992]: 167 167
Dec 06 10:08:36 np0005548790.localdomain systemd[1]: libpod-01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91.scope: Deactivated successfully.
Dec 06 10:08:36 np0005548790.localdomain podman[298977]: 2025-12-06 10:08:36.08853803 +0000 UTC m=+0.177005629 container died 01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_panini, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, ceph=True, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, description=Red Hat Ceph Storage 7)
Dec 06 10:08:36 np0005548790.localdomain podman[298997]: 2025-12-06 10:08:36.194508299 +0000 UTC m=+0.092839560 container remove 01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_panini, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:08:36 np0005548790.localdomain systemd[1]: libpod-conmon-01cafa8bd5533e7bc0ec47c8cd3e8dd209d395f86208e6af57f332435394ec91.scope: Deactivated successfully.
Dec 06 10:08:36 np0005548790.localdomain sudo[298942]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:36 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x55a2e9e831e0 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@1(peon) e13  my rank is now 0 (was 1)
Dec 06 10:08:36 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 06 10:08:36 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 06 10:08:36 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x55a2e9e83080 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: paxos.0).electionLogic(56) init, last seen epoch 56
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548788"} v 0)
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548789"} v 0)
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548790"} v 0)
Dec 06 10:08:36 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:08:36 np0005548790.localdomain sudo[299013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:36 np0005548790.localdomain sudo[299013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:36 np0005548790.localdomain sudo[299013]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:36 np0005548790.localdomain sudo[299031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:36 np0005548790.localdomain sudo[299031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:08:36 np0005548790.localdomain podman[299064]: 2025-12-06 10:08:36.972468195 +0000 UTC m=+0.090186479 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:08:36 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e7d08c3f0555e8497ba100aab91321c8b9d5bb4833da178c2526c4d8c8c10558-merged.mount: Deactivated successfully.
Dec 06 10:08:37 np0005548790.localdomain podman[299064]: 2025-12-06 10:08:37.010421449 +0000 UTC m=+0.128139753 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:08:37 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 2025-12-06 10:08:37.070217105 +0000 UTC m=+0.166242030 container create e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lumiere, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:08:37 np0005548790.localdomain systemd[1]: Started libpod-conmon-e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a.scope.
Dec 06 10:08:37 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 2025-12-06 10:08:37.037744008 +0000 UTC m=+0.133768983 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 2025-12-06 10:08:37.139024004 +0000 UTC m=+0.235048949 container init e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lumiere, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git)
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 2025-12-06 10:08:37.148591099 +0000 UTC m=+0.244616014 container start e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lumiere, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, name=rhceph)
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 2025-12-06 10:08:37.149255716 +0000 UTC m=+0.245280711 container attach e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lumiere, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph)
Dec 06 10:08:37 np0005548790.localdomain admiring_lumiere[299106]: 167 167
Dec 06 10:08:37 np0005548790.localdomain systemd[1]: libpod-e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a.scope: Deactivated successfully.
Dec 06 10:08:37 np0005548790.localdomain podman[299073]: 2025-12-06 10:08:37.152255656 +0000 UTC m=+0.248280571 container died e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lumiere, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:08:37 np0005548790.localdomain podman[299111]: 2025-12-06 10:08:37.248939049 +0000 UTC m=+0.086674836 container remove e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_lumiere, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, GIT_BRANCH=main)
Dec 06 10:08:37 np0005548790.localdomain systemd[1]: libpod-conmon-e23f0a53207bd73b34a9989c3219c9b66e0df6ba4455a67ef4e66d66baf51f2a.scope: Deactivated successfully.
Dec 06 10:08:37 np0005548790.localdomain sudo[299031]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:37 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(electing) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-70dc6cd42ef8ee998e43c3f60590cc8e33ec3f99bb7905363561cfcfbf2a19e2-merged.mount: Deactivated successfully.
Dec 06 10:08:41 np0005548790.localdomain ceph-mds[285635]: mds.beacon.mds.np0005548790.vhcezv missed beacon ack from the monitors
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 is new leader, mons np0005548790,np0005548789 in quorum (ranks 0,2)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : monmap epoch 13
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T10:08:36.308855+0000
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : created 2025-12-06T07:57:14.295835+0000
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : mgrmap e32: np0005548789.mzhmje(active, since 20s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005548790,np0005548789 (MON_DOWN)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005548790,np0005548789
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] :     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] :     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005548790,np0005548789
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] :     mon.np0005548788 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='client.34469 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548787"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Remove daemons mon.np0005548787
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Safe to remove mon.np0005548787: new quorum should be ['np0005548790', 'np0005548788', 'np0005548789'] (from ['np0005548790', 'np0005548788', 'np0005548789'])
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Removing monitor np0005548787 from monmap...
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon rm", "name": "np0005548787"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Removing daemon mon.np0005548787 from np0005548787.localdomain -- ports []
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548789 calling monitor election
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 is new leader, mons np0005548790,np0005548789 in quorum (ranks 0,2)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: monmap epoch 13
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:08:36.308855+0000
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mgrmap e32: np0005548789.mzhmje(active, since 20s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Health check failed: 1/3 mons down, quorum np0005548790,np0005548789 (MON_DOWN)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005548790,np0005548789
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005548790,np0005548789
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]:     mon.np0005548788 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:41 np0005548790.localdomain sudo[299136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:41 np0005548790.localdomain sudo[299136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:41 np0005548790.localdomain sudo[299136]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:41 np0005548790.localdomain sudo[299154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:41 np0005548790.localdomain sudo[299154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:41 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 2025-12-06 10:08:42.088957884 +0000 UTC m=+0.075063156 container create 4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_lamport, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main)
Dec 06 10:08:42 np0005548790.localdomain systemd[1]: Started libpod-conmon-4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4.scope.
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 2025-12-06 10:08:42.058160791 +0000 UTC m=+0.044266093 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:08:42 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 2025-12-06 10:08:42.176729017 +0000 UTC m=+0.162834289 container init 4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_lamport, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, version=7, vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:08:42 np0005548790.localdomain systemd[1]: tmp-crun.mIZGRn.mount: Deactivated successfully.
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 2025-12-06 10:08:42.19255758 +0000 UTC m=+0.178662852 container start 4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_lamport, com.redhat.component=rhceph-container, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 2025-12-06 10:08:42.19292432 +0000 UTC m=+0.179029702 container attach 4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_lamport, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat Ceph Storage 7, release=1763362218, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 06 10:08:42 np0005548790.localdomain blissful_lamport[299203]: 167 167
Dec 06 10:08:42 np0005548790.localdomain systemd[1]: libpod-4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4.scope: Deactivated successfully.
Dec 06 10:08:42 np0005548790.localdomain podman[299188]: 2025-12-06 10:08:42.198864578 +0000 UTC m=+0.184969890 container died 4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_lamport, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:08:42 np0005548790.localdomain podman[299208]: 2025-12-06 10:08:42.302533957 +0000 UTC m=+0.090450616 container remove 4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_lamport, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container)
Dec 06 10:08:42 np0005548790.localdomain systemd[1]: libpod-conmon-4120ec47d62bf03a78d4cba3b72d55dc88c5327627c1389bca4209be11e3c6a4.scope: Deactivated successfully.
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/2722608319' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.32:0/2722608319' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:08:42 np0005548790.localdomain sudo[299154]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.476247) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722476290, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2900, "num_deletes": 256, "total_data_size": 11816752, "memory_usage": 12557864, "flush_reason": "Manual Compaction"}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722518075, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 7243198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17664, "largest_seqno": 20559, "table_properties": {"data_size": 7231318, "index_size": 7361, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31281, "raw_average_key_size": 22, "raw_value_size": 7205079, "raw_average_value_size": 5255, "num_data_blocks": 317, "num_entries": 1371, "num_filter_entries": 1371, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015623, "oldest_key_time": 1765015623, "file_creation_time": 1765015722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 41877 microseconds, and 14527 cpu microseconds.
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.518125) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 7243198 bytes OK
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.518148) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.519746) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.519767) EVENT_LOG_v1 {"time_micros": 1765015722519762, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.519821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 11802616, prev total WAL file size 11804488, number of live WAL files 2.
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.522041) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(7073KB)], [27(12MB)]
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722522128, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20666481, "oldest_snapshot_seqno": -1}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548790.vhcezv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 11123 keys, 17441649 bytes, temperature: kUnknown
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722627165, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17441649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17376651, "index_size": 36097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 297757, "raw_average_key_size": 26, "raw_value_size": 17185455, "raw_average_value_size": 1545, "num_data_blocks": 1384, "num_entries": 11123, "num_filter_entries": 11123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.627607) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17441649 bytes
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.629316) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.5 rd, 165.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.9, 12.8 +0.0 blob) out(16.6 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 11674, records dropped: 551 output_compression: NoCompression
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.629351) EVENT_LOG_v1 {"time_micros": 1765015722629335, "job": 14, "event": "compaction_finished", "compaction_time_micros": 105178, "compaction_time_cpu_micros": 49519, "output_level": 6, "num_output_files": 1, "total_output_size": 17441649, "num_input_records": 11674, "num_output_records": 11123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722630653, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015722632836, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.521874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.632961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.632970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.632974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.632977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:42.632981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:42 np0005548790.localdomain sudo[299232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:42 np0005548790.localdomain sudo[299232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:42 np0005548790.localdomain sudo[299232]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:42 np0005548790.localdomain sudo[299250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:42 np0005548790.localdomain sudo[299250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 10:08:42 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:43 np0005548790.localdomain systemd[1]: tmp-crun.oUPuVK.mount: Deactivated successfully.
Dec 06 10:08:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-decceef509d8a04043135888205d1d8a7d9fde494c549eaf6f9feb5243499fc6-merged.mount: Deactivated successfully.
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 2025-12-06 10:08:43.18748026 +0000 UTC m=+0.069176219 container create fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_bartik, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, release=1763362218, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:08:43 np0005548790.localdomain systemd[1]: Started libpod-conmon-fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25.scope.
Dec 06 10:08:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 2025-12-06 10:08:43.158463914 +0000 UTC m=+0.040159893 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 2025-12-06 10:08:43.268923365 +0000 UTC m=+0.150619314 container init fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_bartik, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, version=7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., release=1763362218, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 2025-12-06 10:08:43.278915061 +0000 UTC m=+0.160611010 container start fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_bartik, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, version=7, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 2025-12-06 10:08:43.279178088 +0000 UTC m=+0.160874087 container attach fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_bartik, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:08:43 np0005548790.localdomain ecstatic_bartik[299299]: 167 167
Dec 06 10:08:43 np0005548790.localdomain podman[299284]: 2025-12-06 10:08:43.281746717 +0000 UTC m=+0.163442696 container died fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_bartik, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container)
Dec 06 10:08:43 np0005548790.localdomain systemd[1]: libpod-fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25.scope: Deactivated successfully.
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: paxos.0).electionLogic(59) init, last seen epoch 59, mid-election, bumping
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : mon.np0005548790 is new leader, mons np0005548790,np0005548788,np0005548789 in quorum (ranks 0,1,2)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : monmap epoch 13
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : last_changed 2025-12-06T10:08:36.308855+0000
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : created 2025-12-06T07:57:14.295835+0000
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [DBG] : mgrmap e32: np0005548789.mzhmje(active, since 22s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548790,np0005548789)
Dec 06 10:08:43 np0005548790.localdomain podman[299304]: 2025-12-06 10:08:43.390935913 +0000 UTC m=+0.099072557 container remove fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_bartik, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True)
Dec 06 10:08:43 np0005548790.localdomain systemd[1]: libpod-conmon-fe71300f8bcf6ef67aa74c0336cad9a8cbd7871bf45bb4644a51630985bc2f25.scope: Deactivated successfully.
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] :     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(cluster) log [WRN] :     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:08:43 np0005548790.localdomain sudo[299250]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:43 np0005548790.localdomain sudo[299320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:43 np0005548790.localdomain sudo[299320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:43 np0005548790.localdomain sudo[299320]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:43 np0005548790.localdomain sudo[299338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:43 np0005548790.localdomain sudo[299338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: from='client.? 172.18.0.106:0/2689790601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548788 calling monitor election
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 calling monitor election
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790 is new leader, mons np0005548790,np0005548788,np0005548789 in quorum (ranks 0,1,2)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: monmap epoch 13
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: last_changed 2025-12-06T10:08:36.308855+0000
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: min_mon_release 18 (reef)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: election_strategy: 1
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: mgrmap e32: np0005548789.mzhmje(active, since 22s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548790,np0005548789)
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:43 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:44 np0005548790.localdomain systemd[1]: tmp-crun.Tlntzk.mount: Deactivated successfully.
Dec 06 10:08:44 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d77d49817a210e12ddd3fcc9e79f17f10f8f67f112d2cf6c91c424b8bf2db3d3-merged.mount: Deactivated successfully.
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 2025-12-06 10:08:44.158596923 +0000 UTC m=+0.079960096 container create 7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_einstein, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:44 np0005548790.localdomain systemd[1]: Started libpod-conmon-7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309.scope.
Dec 06 10:08:44 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 2025-12-06 10:08:44.125363586 +0000 UTC m=+0.046726759 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 2025-12-06 10:08:44.228677165 +0000 UTC m=+0.150040328 container init 7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_einstein, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.41.4)
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 2025-12-06 10:08:44.242482593 +0000 UTC m=+0.163845736 container start 7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_einstein, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, version=7, distribution-scope=public, ceph=True)
Dec 06 10:08:44 np0005548790.localdomain wizardly_einstein[299389]: 167 167
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 2025-12-06 10:08:44.24274355 +0000 UTC m=+0.164106713 container attach 7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_einstein, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, GIT_CLEAN=True)
Dec 06 10:08:44 np0005548790.localdomain systemd[1]: libpod-7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309.scope: Deactivated successfully.
Dec 06 10:08:44 np0005548790.localdomain podman[299374]: 2025-12-06 10:08:44.245954377 +0000 UTC m=+0.167317570 container died 7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_einstein, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:44 np0005548790.localdomain podman[299394]: 2025-12-06 10:08:44.348981038 +0000 UTC m=+0.090227931 container remove 7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_einstein, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, RELEASE=main, ceph=True, vendor=Red Hat, Inc.)
Dec 06 10:08:44 np0005548790.localdomain systemd[1]: libpod-conmon-7bb52ba44501e69375ed42f1728f22e0d246768eab57475ec75ad4c0c468d309.scope: Deactivated successfully.
Dec 06 10:08:44 np0005548790.localdomain sudo[299338]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:44 np0005548790.localdomain sudo[299411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:08:44 np0005548790.localdomain sudo[299411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:44 np0005548790.localdomain sudo[299411]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:44 np0005548790.localdomain sudo[299429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:44 np0005548790.localdomain sudo[299429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:44 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 
Dec 06 10:08:45 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7793a06787fb5f77223c520f2d1709c7aad1b156ce8a2be1a676d604f25eade2-merged.mount: Deactivated successfully.
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 2025-12-06 10:08:45.104598777 +0000 UTC m=+0.084251401 container create 60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_engelbart, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, CEPH_POINT_RELEASE=, release=1763362218, name=rhceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z)
Dec 06 10:08:45 np0005548790.localdomain systemd[1]: Started libpod-conmon-60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce.scope.
Dec 06 10:08:45 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 2025-12-06 10:08:45.167309361 +0000 UTC m=+0.146961985 container init 60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_engelbart, com.redhat.component=rhceph-container, name=rhceph, version=7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 2025-12-06 10:08:45.068291737 +0000 UTC m=+0.047944431 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:08:45 np0005548790.localdomain infallible_engelbart[299480]: 167 167
Dec 06 10:08:45 np0005548790.localdomain systemd[1]: libpod-60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce.scope: Deactivated successfully.
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 2025-12-06 10:08:45.181971384 +0000 UTC m=+0.161623988 container start 60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_engelbart, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 2025-12-06 10:08:45.182278742 +0000 UTC m=+0.161931376 container attach 60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_engelbart, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:08:45 np0005548790.localdomain podman[299465]: 2025-12-06 10:08:45.184324807 +0000 UTC m=+0.163977431 container died 60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_engelbart, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, release=1763362218, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:45 np0005548790.localdomain podman[299485]: 2025-12-06 10:08:45.282360315 +0000 UTC m=+0.086247995 container remove 60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=infallible_engelbart, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:08:45 np0005548790.localdomain systemd[1]: libpod-conmon-60b35363e587506d4bd830510fddef009d0395e39204bb8d0c2c1f20c0f5d8ce.scope: Deactivated successfully.
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:45 np0005548790.localdomain sudo[299429]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: from='client.44452 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: Removed label mgr from host np0005548787.localdomain
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:45 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:46 np0005548790.localdomain systemd[1]: tmp-crun.SJzUFn.mount: Deactivated successfully.
Dec 06 10:08:46 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-8a05b763ff45614c80587fe6c769962ad0760d85fe582035cf4b1a32463044bf-merged.mount: Deactivated successfully.
Dec 06 10:08:46 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:08:48.391 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:08:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:08:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:08:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:08:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:08:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:08:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18701 "" "Go-http-client/1.1"
Dec 06 10:08:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:08:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:08:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:08:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: from='client.44458 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: Removed label _admin from host np0005548787.localdomain
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:48 np0005548790.localdomain sudo[299501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:08:48 np0005548790.localdomain sudo[299501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:48 np0005548790.localdomain sudo[299501]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:48 np0005548790.localdomain sudo[299519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:08:48 np0005548790.localdomain sudo[299519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:48 np0005548790.localdomain sudo[299519]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:48 np0005548790.localdomain sudo[299537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:48 np0005548790.localdomain sudo[299537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:48 np0005548790.localdomain sudo[299537]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:48 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:48 np0005548790.localdomain sudo[299555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:48 np0005548790.localdomain sudo[299555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:48 np0005548790.localdomain sudo[299555]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain sudo[299573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299573]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain sudo[299607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299607]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain sudo[299625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299625]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:08:49 np0005548790.localdomain sudo[299643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299643]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:49 np0005548790.localdomain sudo[299661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299661]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:08:49 np0005548790.localdomain sudo[299679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299679]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain sudo[299697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299697]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: Removing np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:49 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:49 np0005548790.localdomain sudo[299715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:08:49 np0005548790.localdomain sudo[299715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299715]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain sudo[299733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:08:49 np0005548790.localdomain sudo[299733]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain podman[299751]: 2025-12-06 10:08:49.855971694 +0000 UTC m=+0.086841050 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:08:49 np0005548790.localdomain podman[299751]: 2025-12-06 10:08:49.894279167 +0000 UTC m=+0.125148513 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:08:49 np0005548790.localdomain sudo[299781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:08:49 np0005548790.localdomain sudo[299781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299781]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:49 np0005548790.localdomain sudo[299803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:08:49 np0005548790.localdomain sudo[299803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:49 np0005548790.localdomain sudo[299803]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:50 np0005548790.localdomain sudo[299821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:50 np0005548790.localdomain sudo[299821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:50 np0005548790.localdomain sudo[299821]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:50 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: Removing daemon mgr.np0005548787.umwsra from np0005548787.localdomain -- ports [8765]
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.954448) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015731954517, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 595, "num_deletes": 250, "total_data_size": 510176, "memory_usage": 522840, "flush_reason": "Manual Compaction"}
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015731960337, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 453089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20560, "largest_seqno": 21154, "table_properties": {"data_size": 449739, "index_size": 1205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8096, "raw_average_key_size": 19, "raw_value_size": 442687, "raw_average_value_size": 1054, "num_data_blocks": 48, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015722, "oldest_key_time": 1765015722, "file_creation_time": 1765015731, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 5948 microseconds, and 2567 cpu microseconds.
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.960401) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 453089 bytes OK
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.960422) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.962287) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.962309) EVENT_LOG_v1 {"time_micros": 1765015731962303, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.962332) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 506733, prev total WAL file size 506733, number of live WAL files 2.
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.962885) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323836' seq:72057594037927935, type:22 .. '6B760031353337' seq:0, type:0; will stop at (end)
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(442KB)], [30(16MB)]
Dec 06 10:08:51 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015731962932, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 17894738, "oldest_snapshot_seqno": -1}
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11016 keys, 16889861 bytes, temperature: kUnknown
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015732068818, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 16889861, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16826450, "index_size": 34766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27589, "raw_key_size": 297268, "raw_average_key_size": 26, "raw_value_size": 16637814, "raw_average_value_size": 1510, "num_data_blocks": 1311, "num_entries": 11016, "num_filter_entries": 11016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015441, "oldest_key_time": 0, "file_creation_time": 1765015731, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "8be51b2d-bcce-4d27-9e46-3f3cdf9e4a92", "db_session_id": "9EENGG53AOVF6BJXYAD5", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.069312) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 16889861 bytes
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.071311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.7 rd, 159.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 16.6 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(76.8) write-amplify(37.3) OK, records in: 11543, records dropped: 527 output_compression: NoCompression
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.071343) EVENT_LOG_v1 {"time_micros": 1765015732071329, "job": 16, "event": "compaction_finished", "compaction_time_micros": 106075, "compaction_time_cpu_micros": 30461, "output_level": 6, "num_output_files": 1, "total_output_size": 16889861, "num_input_records": 11543, "num_output_records": 11016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015732071655, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015732074624, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:51.962803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.074745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.074755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.074758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.074762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: rocksdb: (Original Log Time 2025/12/06-10:08:52.074765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:08:52 np0005548790.localdomain sshd[299839]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"} v 0)
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"} : dispatch
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"}]': finished
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:52 np0005548790.localdomain sudo[299841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:08:52 np0005548790.localdomain sudo[299841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:52 np0005548790.localdomain sudo[299841]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"} : dispatch
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"}]': finished
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:52 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:08:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:08:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:08:53 np0005548790.localdomain ceph-mon[288373]: Removing key for mgr.np0005548787.umwsra
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:54 np0005548790.localdomain sshd[299839]: Connection reset by authenticating user root 45.135.232.92 port 27486 [preauth]
Dec 06 10:08:54 np0005548790.localdomain sshd[299859]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:08:54 np0005548790.localdomain sudo[299860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:08:54 np0005548790.localdomain sudo[299860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:08:54 np0005548790.localdomain sudo[299860]: pam_unix(sudo:session): session closed for user root
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:54 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain.devices.0}] v 0)
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548787.localdomain}] v 0)
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:08:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:08:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:08:55 np0005548790.localdomain podman[299880]: 2025-12-06 10:08:55.582158204 +0000 UTC m=+0.088047032 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:08:55 np0005548790.localdomain podman[299881]: 2025-12-06 10:08:55.635471758 +0000 UTC m=+0.138311984 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:08:55 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:55 np0005548790.localdomain podman[299881]: 2025-12-06 10:08:55.679282908 +0000 UTC m=+0.182123064 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:08:55 np0005548790.localdomain podman[299880]: 2025-12-06 10:08:55.689124161 +0000 UTC m=+0.195012989 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 06 10:08:55 np0005548790.localdomain podman[299879]: 2025-12-06 10:08:55.692013188 +0000 UTC m=+0.200587407 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:08:55 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:08:55 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:08:55 np0005548790.localdomain podman[299879]: 2025-12-06 10:08:55.771328877 +0000 UTC m=+0.279903096 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:08:55 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain sshd[299859]: Invalid user admin from 45.135.232.92 port 43092
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:56 np0005548790.localdomain sshd[299859]: Connection reset by invalid user admin 45.135.232.92 port 43092 [preauth]
Dec 06 10:08:56 np0005548790.localdomain sshd[299939]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:56 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='client.54257 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005548787.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: Added label _no_schedule to host np0005548787.localdomain
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548787.localdomain
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:57 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:58 np0005548790.localdomain sshd[299939]: Connection reset by authenticating user root 45.135.232.92 port 43106 [preauth]
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:08:58 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:58 np0005548790.localdomain sshd[299941]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005548787.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"}]': finished
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:08:59 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:00 np0005548790.localdomain sshd[299941]: Connection reset by authenticating user root 45.135.232.92 port 43114 [preauth]
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='client.44470 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005548787.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"} : dispatch
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"}]': finished
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: Removed host np0005548787.localdomain
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:09:00 np0005548790.localdomain sshd[299943]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:09:00 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:01 np0005548790.localdomain sshd[299945]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:09:01 np0005548790.localdomain sshd[299945]: Accepted publickey for tripleo-admin from 192.168.122.11 port 46396 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:09:01 np0005548790.localdomain systemd-logind[760]: New session 69 of user tripleo-admin.
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 06 10:09:01 np0005548790.localdomain podman[299947]: 2025-12-06 10:09:01.723870461 +0000 UTC m=+0.079262918 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 06 10:09:01 np0005548790.localdomain podman[299947]: 2025-12-06 10:09:01.739132119 +0000 UTC m=+0.094524556 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Queued start job for default target Main User Target.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Created slice User Application Slice.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Started Daily Cleanup of User's Temporary Directories.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Reached target Paths.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Reached target Timers.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Starting D-Bus User Message Bus Socket...
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Starting Create User's Volatile Files and Directories...
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Finished Create User's Volatile Files and Directories.
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Listening on D-Bus User Message Bus Socket.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Reached target Sockets.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Reached target Basic System.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Reached target Main User Target.
Dec 06 10:09:01 np0005548790.localdomain systemd[299965]: Startup finished in 120ms.
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:01 np0005548790.localdomain systemd[1]: Started Session 69 of User tripleo-admin.
Dec 06 10:09:01 np0005548790.localdomain sshd[299945]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 06 10:09:01 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:09:02 np0005548790.localdomain sshd[299943]: Invalid user admin from 45.135.232.92 port 43118
Dec 06 10:09:02 np0005548790.localdomain sudo[300106]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpqraoelsxcafayqjineqwnftwdchtpw ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015742.007751-61743-50273301620584/AnsiballZ_lineinfile.py
Dec 06 10:09:02 np0005548790.localdomain sudo[300106]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 10:09:02 np0005548790.localdomain python3[300108]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line=    - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 06 10:09:02 np0005548790.localdomain sudo[300106]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:02 np0005548790.localdomain sshd[299943]: Connection reset by invalid user admin 45.135.232.92 port 43118 [preauth]
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:02 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:03 np0005548790.localdomain sudo[300252]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xusyeprhuzmyqziudoqsgdsvggtssmql ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015742.8703902-61759-161857523361665/AnsiballZ_command.py
Dec 06 10:09:03 np0005548790.localdomain sudo[300252]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 10:09:03 np0005548790.localdomain python3[300254]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 10:09:03 np0005548790.localdomain sudo[300252]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:03 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:04 np0005548790.localdomain sudo[300397]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdgpxozcfxqbssygehsjwzeedisfoxdk ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1765015743.652506-61770-170090585538108/AnsiballZ_command.py
Dec 06 10:09:04 np0005548790.localdomain sudo[300397]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 06 10:09:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:09:04 np0005548790.localdomain podman[300400]: 2025-12-06 10:09:04.228762566 +0000 UTC m=+0.089591764 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:09:04 np0005548790.localdomain podman[300400]: 2025-12-06 10:09:04.243237212 +0000 UTC m=+0.104066410 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:09:04 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:09:04 np0005548790.localdomain python3[300399]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:04 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:09:05 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:06 np0005548790.localdomain sudo[300425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:06 np0005548790.localdomain sudo[300425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:06 np0005548790.localdomain sudo[300397]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:06 np0005548790.localdomain sudo[300425]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:06 np0005548790.localdomain sudo[300443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:06 np0005548790.localdomain sudo[300443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 2025-12-06 10:09:06.90031589 +0000 UTC m=+0.064407651 container create d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_kepler, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph)
Dec 06 10:09:06 np0005548790.localdomain systemd[1]: Started libpod-conmon-d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428.scope.
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:09:06 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 2025-12-06 10:09:06.967875835 +0000 UTC m=+0.131967636 container init d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_kepler, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph)
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:06 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 2025-12-06 10:09:06.873910385 +0000 UTC m=+0.038002186 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 2025-12-06 10:09:06.976855825 +0000 UTC m=+0.140947616 container start d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_kepler, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, version=7, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 2025-12-06 10:09:06.977196904 +0000 UTC m=+0.141288745 container attach d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_kepler, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7)
Dec 06 10:09:06 np0005548790.localdomain objective_kepler[300512]: 167 167
Dec 06 10:09:06 np0005548790.localdomain systemd[1]: libpod-d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428.scope: Deactivated successfully.
Dec 06 10:09:06 np0005548790.localdomain podman[300496]: 2025-12-06 10:09:06.98117149 +0000 UTC m=+0.145263331 container died d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_kepler, ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:09:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:09:07 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5e18a7ac5e459305d8b0f7c1f7ca35598cdeed7405ba8fde2a7d24690f45a0d4-merged.mount: Deactivated successfully.
Dec 06 10:09:07 np0005548790.localdomain podman[300517]: 2025-12-06 10:09:07.073198647 +0000 UTC m=+0.085083083 container remove d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_kepler, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, release=1763362218, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Dec 06 10:09:07 np0005548790.localdomain systemd[1]: libpod-conmon-d733b935f208559dbf75add2a70b2ebbd06c9b52ebad3ed843d6d3e5c3841428.scope: Deactivated successfully.
Dec 06 10:09:07 np0005548790.localdomain sudo[300443]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain podman[300531]: 2025-12-06 10:09:07.157836137 +0000 UTC m=+0.098079320 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain podman[300531]: 2025-12-06 10:09:07.202233883 +0000 UTC m=+0.142477046 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:09:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:09:07 np0005548790.localdomain sudo[300558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:09:07 np0005548790.localdomain sudo[300558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:07 np0005548790.localdomain sudo[300558]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain sudo[300576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:09:07 np0005548790.localdomain sudo[300576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:07 np0005548790.localdomain sudo[300576]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:07 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:08 np0005548790.localdomain ceph-mon[288373]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:08 np0005548790.localdomain ceph-mon[288373]: from='client.44476 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:08 np0005548790.localdomain ceph-mon[288373]: Saving service mon spec with placement label:mon
Dec 06 10:09:10 np0005548790.localdomain ceph-mon[288373]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:10 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:09:10 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:11 np0005548790.localdomain ceph-mon[288373]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:11 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:09:12 np0005548790.localdomain ceph-mon[288373]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:12 np0005548790.localdomain ceph-mon[288373]: from='client.44482 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548790", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:09:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 06 10:09:13 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [DBG] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:09:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e13 handle_command mon_command({"prefix": "mon rm", "name": "np0005548790"} v 0)
Dec 06 10:09:13 np0005548790.localdomain ceph-mon[288373]: log_channel(audit) log [INF] : from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon rm", "name": "np0005548790"} : dispatch
Dec 06 10:09:13 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x55a2e9e831e0 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 06 10:09:13 np0005548790.localdomain ceph-mon[288373]: mon.np0005548790@0(leader) e14  removed from monmap, suicide.
Dec 06 10:09:13 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 06 10:09:13 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Dec 06 10:09:13 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x55a2e9e83080 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 06 10:09:13 np0005548790.localdomain sudo[300594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:13 np0005548790.localdomain sudo[300594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300594]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:13 np0005548790.localdomain podman[300608]: 2025-12-06 10:09:13.454356779 +0000 UTC m=+0.055097783 container died c2f79d602097dfaaa7c8301203250e6916286483632f1ca70c4f11fc9e548b5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:09:13 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e800146688dd6667c503ef525a8dc9b08fdbcbbe4822831d3e3bbdbda8ab1a17-merged.mount: Deactivated successfully.
Dec 06 10:09:13 np0005548790.localdomain podman[300608]: 2025-12-06 10:09:13.493269838 +0000 UTC m=+0.094010802 container remove c2f79d602097dfaaa7c8301203250e6916286483632f1ca70c4f11fc9e548b5c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:09:13 np0005548790.localdomain sudo[300624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:09:13 np0005548790.localdomain sudo[300624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 rm-daemon --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 --name mon.np0005548790 --force
Dec 06 10:09:13 np0005548790.localdomain sudo[300624]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:13 np0005548790.localdomain sudo[300625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:09:13 np0005548790.localdomain sudo[300668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300668]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:13 np0005548790.localdomain sudo[300693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:13 np0005548790.localdomain sudo[300693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300693]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:13 np0005548790.localdomain sudo[300711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:13 np0005548790.localdomain sudo[300711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300711]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:13 np0005548790.localdomain sudo[300729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:13 np0005548790.localdomain sudo[300729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300729]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:13 np0005548790.localdomain sudo[300795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:13 np0005548790.localdomain sudo[300795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:13 np0005548790.localdomain sudo[300795]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:14 np0005548790.localdomain sudo[300818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300818]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:09:14 np0005548790.localdomain sudo[300859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300859]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:14 np0005548790.localdomain sudo[300885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300885]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:14 np0005548790.localdomain sudo[300904]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300904]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:14 np0005548790.localdomain sudo[300928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300928]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:14 np0005548790.localdomain sudo[300953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300953]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8@mon.np0005548790.service: Deactivated successfully.
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: Stopped Ceph mon.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8@mon.np0005548790.service: Consumed 11.705s CPU time.
Dec 06 10:09:14 np0005548790.localdomain sudo[300971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:14 np0005548790.localdomain sudo[300971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[300971]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:09:14 np0005548790.localdomain systemd-rc-local-generator[301045]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:09:14 np0005548790.localdomain systemd-sysv-generator[301051]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain sudo[301010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:14 np0005548790.localdomain sudo[301010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[301010]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[300625]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[301062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:14 np0005548790.localdomain sudo[301062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[301062]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:14 np0005548790.localdomain sudo[301080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:14 np0005548790.localdomain sudo[301080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:14 np0005548790.localdomain sudo[301080]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:15 np0005548790.localdomain sudo[301098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:09:15 np0005548790.localdomain sudo[301098]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:15 np0005548790.localdomain sudo[301098]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:09:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:09:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:09:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152653 "" "Go-http-client/1.1"
Dec 06 10:09:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:09:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18219 "" "Go-http-client/1.1"
Dec 06 10:09:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:09:20 np0005548790.localdomain podman[301116]: 2025-12-06 10:09:20.563351568 +0000 UTC m=+0.077854840 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 10:09:20 np0005548790.localdomain podman[301116]: 2025-12-06 10:09:20.571207087 +0000 UTC m=+0.085710379 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:09:20 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:09:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:09:24 np0005548790.localdomain sudo[301133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:24 np0005548790.localdomain sudo[301133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:24 np0005548790.localdomain sudo[301133]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:24 np0005548790.localdomain sudo[301151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:24 np0005548790.localdomain sudo[301151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 2025-12-06 10:09:25.459151592 +0000 UTC m=+0.081435936 container create c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shaw, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: Started libpod-conmon-c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52.scope.
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 2025-12-06 10:09:25.51936663 +0000 UTC m=+0.141650944 container init c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shaw, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=1763362218, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 2025-12-06 10:09:25.427161738 +0000 UTC m=+0.049446082 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: tmp-crun.dI5A6z.mount: Deactivated successfully.
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 2025-12-06 10:09:25.535255974 +0000 UTC m=+0.157540288 container start c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shaw, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, distribution-scope=public, release=1763362218, com.redhat.component=rhceph-container)
Dec 06 10:09:25 np0005548790.localdomain elegant_shaw[301201]: 167 167
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: libpod-c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52.scope: Deactivated successfully.
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 2025-12-06 10:09:25.536829916 +0000 UTC m=+0.159114240 container attach c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shaw, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:09:25 np0005548790.localdomain podman[301186]: 2025-12-06 10:09:25.543309779 +0000 UTC m=+0.165594123 container died c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shaw, GIT_CLEAN=True, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, RELEASE=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container)
Dec 06 10:09:25 np0005548790.localdomain podman[301206]: 2025-12-06 10:09:25.618713323 +0000 UTC m=+0.069148257 container remove c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_shaw, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, RELEASE=main, build-date=2025-11-26T19:44:28Z)
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: libpod-conmon-c17f9b5157aaac4cc77231f2beda1f9262ab5a8767f4f7511dfd88b174a2bd52.scope: Deactivated successfully.
Dec 06 10:09:25 np0005548790.localdomain sudo[301151]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:25 np0005548790.localdomain sudo[301222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:25 np0005548790.localdomain sudo[301222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:09:25 np0005548790.localdomain sudo[301222]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:09:25 np0005548790.localdomain sudo[301243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:25 np0005548790.localdomain sudo[301243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:25 np0005548790.localdomain podman[301241]: 2025-12-06 10:09:25.900222311 +0000 UTC m=+0.076754101 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:25 np0005548790.localdomain podman[301241]: 2025-12-06 10:09:25.938069051 +0000 UTC m=+0.114600821 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 06 10:09:25 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:09:25 np0005548790.localdomain podman[301239]: 2025-12-06 10:09:25.952057055 +0000 UTC m=+0.132389476 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:09:26 np0005548790.localdomain podman[301242]: 2025-12-06 10:09:26.018118789 +0000 UTC m=+0.191047742 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec 06 10:09:26 np0005548790.localdomain podman[301242]: 2025-12-06 10:09:26.034055955 +0000 UTC m=+0.206984958 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, maintainer=Red Hat, Inc., distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-type=git)
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:09:26 np0005548790.localdomain podman[301239]: 2025-12-06 10:09:26.089133376 +0000 UTC m=+0.269465837 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 2025-12-06 10:09:26.382464039 +0000 UTC m=+0.083880521 container create 011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_chaum, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=)
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: Started libpod-conmon-011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0.scope.
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 2025-12-06 10:09:26.346346235 +0000 UTC m=+0.047762747 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 2025-12-06 10:09:26.453838295 +0000 UTC m=+0.155254777 container init 011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_chaum, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, version=7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main)
Dec 06 10:09:26 np0005548790.localdomain sudo[301347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-894f5fba3d688f95b39ceeca8349d2653040025c207183ec603dbd3bf6142803-merged.mount: Deactivated successfully.
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: libpod-011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0.scope: Deactivated successfully.
Dec 06 10:09:26 np0005548790.localdomain romantic_chaum[301364]: 167 167
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 2025-12-06 10:09:26.466768851 +0000 UTC m=+0.168185313 container start 011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_chaum, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, RELEASE=main, ceph=True)
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 2025-12-06 10:09:26.467452099 +0000 UTC m=+0.168868641 container attach 011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_chaum, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, version=7, name=rhceph, vcs-type=git)
Dec 06 10:09:26 np0005548790.localdomain sudo[301347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:26 np0005548790.localdomain podman[301333]: 2025-12-06 10:09:26.46975657 +0000 UTC m=+0.171173042 container died 011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_chaum, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=)
Dec 06 10:09:26 np0005548790.localdomain sudo[301347]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-872f39edc22a6d60b8af4b41b51e5ea252ad067589060b7e49062391b16e4c8b-merged.mount: Deactivated successfully.
Dec 06 10:09:26 np0005548790.localdomain sudo[301377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:26 np0005548790.localdomain sudo[301377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:26 np0005548790.localdomain podman[301370]: 2025-12-06 10:09:26.550264331 +0000 UTC m=+0.076043532 container remove 011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_chaum, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_BRANCH=main, release=1763362218, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:09:26 np0005548790.localdomain systemd[1]: libpod-conmon-011ee5e041fe4736418e2de57b4488dd9b7048adaf0d717d64a40f2d5c4c0be0.scope: Deactivated successfully.
Dec 06 10:09:26 np0005548790.localdomain sudo[301243]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:26 np0005548790.localdomain sudo[301412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:26 np0005548790.localdomain sudo[301412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:26 np0005548790.localdomain sudo[301412]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:26 np0005548790.localdomain sudo[301439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:26 np0005548790.localdomain sudo[301439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 2025-12-06 10:09:27.094811543 +0000 UTC m=+0.067609727 container create 5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_rubin, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: Started libpod-conmon-5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e.scope.
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 2025-12-06 10:09:27.062436579 +0000 UTC m=+0.035234803 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 2025-12-06 10:09:27.163439126 +0000 UTC m=+0.136237280 container init 5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_rubin, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 2025-12-06 10:09:27.172625451 +0000 UTC m=+0.145423635 container start 5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_rubin, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vendor=Red Hat, Inc., release=1763362218, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:09:27 np0005548790.localdomain crazy_rubin[301508]: 167 167
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 2025-12-06 10:09:27.17335157 +0000 UTC m=+0.146149754 container attach 5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_rubin, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, version=7, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, description=Red Hat Ceph Storage 7)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: libpod-5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e.scope: Deactivated successfully.
Dec 06 10:09:27 np0005548790.localdomain podman[301493]: 2025-12-06 10:09:27.176932775 +0000 UTC m=+0.149730989 container died 5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_rubin, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 06 10:09:27 np0005548790.localdomain podman[301515]: 2025-12-06 10:09:27.255866084 +0000 UTC m=+0.073251807 container remove 5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_rubin, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: libpod-conmon-5ca76d691f06d6ad964f6279445577483eadce785e30250921fbc073590f083e.scope: Deactivated successfully.
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 2025-12-06 10:09:27.370219957 +0000 UTC m=+0.079388721 container create f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_meitner, name=rhceph, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_BRANCH=main)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: Started libpod-conmon-f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20.scope.
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:27 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0b59c34fd74064a78d443cf5ec04a7c60d55761c66c06e5ac0cb604b49258b/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:27 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0b59c34fd74064a78d443cf5ec04a7c60d55761c66c06e5ac0cb604b49258b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:27 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0b59c34fd74064a78d443cf5ec04a7c60d55761c66c06e5ac0cb604b49258b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:27 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac0b59c34fd74064a78d443cf5ec04a7c60d55761c66c06e5ac0cb604b49258b/merged/var/lib/ceph/mon/ceph-np0005548790 supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 2025-12-06 10:09:27.33623989 +0000 UTC m=+0.045408694 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 2025-12-06 10:09:27.436857747 +0000 UTC m=+0.146026511 container init f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_meitner, RELEASE=main, io.buildah.version=1.41.4, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, ceph=True, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z)
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 2025-12-06 10:09:27.448166609 +0000 UTC m=+0.157335373 container start f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_meitner, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, ceph=True, distribution-scope=public, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 2025-12-06 10:09:27.448420056 +0000 UTC m=+0.157588840 container attach f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_meitner, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0023ce3a0adf74f74ab030da54152bacf8060600aa090b2a2cc3f7af8125a553-merged.mount: Deactivated successfully.
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: libpod-f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20.scope: Deactivated successfully.
Dec 06 10:09:27 np0005548790.localdomain podman[301543]: 2025-12-06 10:09:27.592891274 +0000 UTC m=+0.302060078 container died f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_meitner, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ac0b59c34fd74064a78d443cf5ec04a7c60d55761c66c06e5ac0cb604b49258b-merged.mount: Deactivated successfully.
Dec 06 10:09:27 np0005548790.localdomain podman[301584]: 2025-12-06 10:09:27.687548862 +0000 UTC m=+0.080930332 container remove f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_meitner, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, ceph=True)
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: libpod-conmon-f01b1ee4c2832cd5aef7c0cd8d87e93eb078abc4d7f4fd45d62363ce2a0d7f20.scope: Deactivated successfully.
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:09:27 np0005548790.localdomain systemd-rc-local-generator[301624]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:09:27 np0005548790.localdomain systemd-sysv-generator[301627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:27 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: Reloading.
Dec 06 10:09:28 np0005548790.localdomain systemd-rc-local-generator[301660]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 06 10:09:28 np0005548790.localdomain systemd-sysv-generator[301666]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 06 10:09:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:28.367 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:28.368 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:09:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:28.368 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:09:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:28.391 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: Starting Ceph mon.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8...
Dec 06 10:09:28 np0005548790.localdomain podman[301724]: 
Dec 06 10:09:28 np0005548790.localdomain podman[301724]: 2025-12-06 10:09:28.781153077 +0000 UTC m=+0.083617164 container create d41db198b5de68bab02c5dbcca0d8fed0215f99aaf594495e4eea407c25adfd1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: tmp-crun.Wljtkr.mount: Deactivated successfully.
Dec 06 10:09:28 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7864f2cfe8265f958082a2e2b5fb82fe10bcdac23845236747571dd01a8b6a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:28 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7864f2cfe8265f958082a2e2b5fb82fe10bcdac23845236747571dd01a8b6a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:28 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7864f2cfe8265f958082a2e2b5fb82fe10bcdac23845236747571dd01a8b6a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:28 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7864f2cfe8265f958082a2e2b5fb82fe10bcdac23845236747571dd01a8b6a7/merged/var/lib/ceph/mon/ceph-np0005548790 supports timestamps until 2038 (0x7fffffff)
Dec 06 10:09:28 np0005548790.localdomain podman[301724]: 2025-12-06 10:09:28.84491744 +0000 UTC m=+0.147381537 container init d41db198b5de68bab02c5dbcca0d8fed0215f99aaf594495e4eea407c25adfd1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:09:28 np0005548790.localdomain podman[301724]: 2025-12-06 10:09:28.750590781 +0000 UTC m=+0.053054908 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:28 np0005548790.localdomain bash[301724]: d41db198b5de68bab02c5dbcca0d8fed0215f99aaf594495e4eea407c25adfd1
Dec 06 10:09:28 np0005548790.localdomain podman[301724]: 2025-12-06 10:09:28.853803407 +0000 UTC m=+0.156267504 container start d41db198b5de68bab02c5dbcca0d8fed0215f99aaf594495e4eea407c25adfd1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mon-np0005548790, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:09:28 np0005548790.localdomain systemd[1]: Started Ceph mon.np0005548790 for 1939e851-b10c-5c3b-9bb7-8e7f380233e8.
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: set uid:gid to 167:167 (ceph:ceph)
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: pidfile_write: ignore empty --pid-file
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: load: jerasure load: lrc 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: RocksDB version: 7.9.2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Git sha 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: DB SUMMARY
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: DB Session ID:  CFD0WFBBCIFLI72L04W0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: CURRENT file:  CURRENT
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: IDENTITY file:  IDENTITY
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005548790/store.db dir, Total Num: 0, files: 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005548790/store.db: 000004.log size: 636 ; 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                         Options.error_if_exists: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.create_if_missing: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                         Options.paranoid_checks: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                                     Options.env: 0x55bcaf47b9e0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                                Options.info_log: 0x55bcb0286d20
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.max_file_opening_threads: 16
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                              Options.statistics: (nil)
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                               Options.use_fsync: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.max_log_file_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                         Options.allow_fallocate: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                        Options.use_direct_reads: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.create_missing_column_families: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                              Options.db_log_dir: 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                                 Options.wal_dir: 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.advise_random_on_open: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                    Options.write_buffer_manager: 0x55bcb0297540
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                            Options.rate_limiter: (nil)
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.unordered_write: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                               Options.row_cache: None
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                              Options.wal_filter: None
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.allow_ingest_behind: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.two_write_queues: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.manual_wal_flush: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.wal_compression: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.atomic_flush: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.log_readahead_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.allow_data_in_errors: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.db_host_id: __hostname__
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.max_background_jobs: 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.max_background_compactions: -1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.max_subcompactions: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.max_total_wal_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                          Options.max_open_files: -1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                          Options.bytes_per_sync: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:       Options.compaction_readahead_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.max_background_flushes: -1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Compression algorithms supported:
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kZSTD supported: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kXpressCompression supported: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kBZip2Compression supported: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kLZ4Compression supported: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kZlibCompression supported: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kLZ4HCCompression supported: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         kSnappyCompression supported: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005548790/store.db/MANIFEST-000005
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:           Options.merge_operator: 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:        Options.compaction_filter: None
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:        Options.compaction_filter_factory: None
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:  Options.sst_partitioner_factory: None
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bcb0286980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x55bcb02831f0
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:        Options.write_buffer_size: 33554432
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:  Options.max_write_buffer_number: 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.compression: NoCompression
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:       Options.prefix_extractor: nullptr
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.num_levels: 7
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.compression_opts.level: 32767
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:               Options.compression_opts.strategy: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                  Options.compression_opts.enabled: false
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                        Options.arena_block_size: 1048576
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.disable_auto_compactions: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.table_properties_collectors: 
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.inplace_update_support: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                           Options.bloom_locality: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                    Options.max_successive_merges: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.paranoid_file_checks: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.force_consistency_checks: 1
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.report_bg_io_stats: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                               Options.ttl: 2592000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                       Options.enable_blob_files: false
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                           Options.min_blob_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                          Options.blob_file_size: 268435456
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb:                Options.blob_file_starting_level: 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005548790/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4dd2910d-705d-477e-9f8b-a80f7db9791a
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015768919008, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015768922648, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015768922760, "job": 1, "event": "recovery_finished"}
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 06 10:09:28 np0005548790.localdomain sudo[301377]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bcb02aae00
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: DB pointer 0x55bcb03a0000
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.72 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                            Sum      1/0    1.72 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x55bcb02831f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.2e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 does not exist in monmap, will attempt to join an existing cluster
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0]
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: starting mon.np0005548790 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005548790 fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(???) e0 preinit fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing) e14 sync_obtain_latest_monmap
Dec 06 10:09:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing) e14 sync_obtain_latest_monmap obtained monmap e14
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 2025-12-06 10:09:29.058552025 +0000 UTC m=+0.071855319 container create 1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_matsumoto, GIT_BRANCH=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:09:29 np0005548790.localdomain systemd[1]: Started libpod-conmon-1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe.scope.
Dec 06 10:09:29 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 2025-12-06 10:09:29.024143456 +0000 UTC m=+0.037445660 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 2025-12-06 10:09:29.131440102 +0000 UTC m=+0.144742286 container init 1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_matsumoto, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 2025-12-06 10:09:29.141905401 +0000 UTC m=+0.155207565 container start 1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_matsumoto, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 2025-12-06 10:09:29.142156238 +0000 UTC m=+0.155458462 container attach 1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_matsumoto, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Dec 06 10:09:29 np0005548790.localdomain sweet_matsumoto[301798]: 167 167
Dec 06 10:09:29 np0005548790.localdomain podman[301785]: 2025-12-06 10:09:29.145593089 +0000 UTC m=+0.158895273 container died 1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_matsumoto, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7)
Dec 06 10:09:29 np0005548790.localdomain systemd[1]: libpod-1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe.scope: Deactivated successfully.
Dec 06 10:09:29 np0005548790.localdomain podman[301803]: 2025-12-06 10:09:29.252401192 +0000 UTC m=+0.095408339 container remove 1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_matsumoto, io.openshift.expose-services=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main)
Dec 06 10:09:29 np0005548790.localdomain systemd[1]: libpod-conmon-1c02a675145828b72dc52dfceb50fe7966a15cbaa03813856e92ea02ccd383fe.scope: Deactivated successfully.
Dec 06 10:09:29 np0005548790.localdomain sudo[301439]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).mds e16 new map
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).mds e16 print_map
                                                           e16
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        16
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2025-12-06T08:18:49.925523+0000
                                                           modified        2025-12-06T10:03:02.051468+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        87
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26356}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26356 members: 26356
                                                           [mds.mds.np0005548790.vhcezv{0:26356} state up:active seq 16 addr [v2:172.18.0.108:6808/1621657194,v1:172.18.0.108:6809/1621657194] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005548789.vxwwsq{-1:16884} state up:standby seq 1 addr [v2:172.18.0.107:6808/3033303281,v1:172.18.0.107:6809/3033303281] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005548788.erzujf{-1:16890} state up:standby seq 1 addr [v2:172.18.0.106:6808/309324236,v1:172.18.0.106:6809/309324236] compat {c=[1],r=[1],i=[17ff]}]
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).osd e92 crush map has features 3314933000854323200, adjusting msgr requires
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).osd e92 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).osd e92 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).osd e92 crush map has features 432629239337189376, adjusting msgr requires
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44426 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548787", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/109628701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2265627899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 ' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.34469 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548787"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Remove daemons mon.np0005548787
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Safe to remove mon.np0005548787: new quorum should be ['np0005548790', 'np0005548788', 'np0005548789'] (from ['np0005548790', 'np0005548788', 'np0005548789'])
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing monitor np0005548787 from monmap...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon rm", "name": "np0005548787"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing daemon mon.np0005548787 from np0005548787.localdomain -- ports []
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 calling monitor election
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548789 calling monitor election
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 is new leader, mons np0005548790,np0005548789 in quorum (ranks 0,2)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: monmap epoch 13
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: last_changed 2025-12-06T10:08:36.308855+0000
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: min_mon_release 18 (reef)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: election_strategy: 1
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mgrmap e32: np0005548789.mzhmje(active, since 20s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Health check failed: 1/3 mons down, quorum np0005548790,np0005548789 (MON_DOWN)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005548790,np0005548789
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005548790,np0005548789
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     mon.np0005548788 (rank 1) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2722608319' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2722608319' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2689790601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 calling monitor election
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 calling monitor election
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 is new leader, mons np0005548790,np0005548788,np0005548789 in quorum (ranks 0,1,2)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: monmap epoch 13
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: last_changed 2025-12-06T10:08:36.308855+0000
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: min_mon_release 18 (reef)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: election_strategy: 1
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005548790
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mgrmap e32: np0005548789.mzhmje(active, since 22s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548790,np0005548789)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44452 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removed label mgr from host np0005548787.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44458 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005548787.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removed label _admin from host np0005548787.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing np0005548787.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing np0005548787.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing daemon mgr.np0005548787.umwsra from np0005548787.localdomain -- ports [8765]
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005548787.umwsra"}]': finished
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing key for mgr.np0005548787.umwsra
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548787.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548787 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548787 on np0005548787.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.54257 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005548787.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Added label _no_schedule to host np0005548787.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005548787.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44467 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005548787.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44470 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005548787.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain"}]': finished
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removed host np0005548787.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44476 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Saving service mon spec with placement label:mon
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44482 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548790", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.44488 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005548790"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Remove daemons mon.np0005548790
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Safe to remove mon.np0005548790: new quorum should be ['np0005548788', 'np0005548789'] (from ['np0005548788', 'np0005548789'])
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing monitor np0005548790 from monmap...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Removing daemon mon.np0005548790 from np0005548790.localdomain -- ports []
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548789 calling monitor election
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 calling monitor election
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 is new leader, mons np0005548788,np0005548789 in quorum (ranks 0,1)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: monmap epoch 14
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: last_changed 2025-12-06T10:09:13.351903+0000
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: min_mon_release 18 (reef)
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: election_strategy: 1
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mgrmap e32: np0005548789.mzhmje(active, since 53s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: from='client.54285 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005548790.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Deploying daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(synchronizing).paxosservice(auth 1..40) refresh upgraded, format 0 -> 3
Dec 06 10:09:29 np0005548790.localdomain ceph-mgr[286934]: ms_deliver_dispatch: unhandled message 0x55a2e9e831e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 06 10:09:29 np0005548790.localdomain sudo[301828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:29 np0005548790.localdomain sudo[301828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:29 np0005548790.localdomain sudo[301828]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:29 np0005548790.localdomain sudo[301846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:29 np0005548790.localdomain sudo[301846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 2025-12-06 10:09:30.154872893 +0000 UTC m=+0.072640881 container create d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_beaver, ceph=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, architecture=x86_64)
Dec 06 10:09:30 np0005548790.localdomain systemd[1]: Started libpod-conmon-d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75.scope.
Dec 06 10:09:30 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 2025-12-06 10:09:30.211870825 +0000 UTC m=+0.129638813 container init d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_beaver, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.tags=rhceph ceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 2025-12-06 10:09:30.222018605 +0000 UTC m=+0.139786593 container start d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_beaver, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z)
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 2025-12-06 10:09:30.222253503 +0000 UTC m=+0.140021571 container attach d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_beaver, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, ceph=True, description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc.)
Dec 06 10:09:30 np0005548790.localdomain romantic_beaver[301895]: 167 167
Dec 06 10:09:30 np0005548790.localdomain systemd[1]: libpod-d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75.scope: Deactivated successfully.
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 2025-12-06 10:09:30.126216067 +0000 UTC m=+0.043984105 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:30 np0005548790.localdomain podman[301880]: 2025-12-06 10:09:30.226478965 +0000 UTC m=+0.144247013 container died d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_beaver, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:09:30 np0005548790.localdomain podman[301900]: 2025-12-06 10:09:30.320704612 +0000 UTC m=+0.081581540 container remove d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_beaver, distribution-scope=public, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7)
Dec 06 10:09:30 np0005548790.localdomain systemd[1]: libpod-conmon-d28f9637fd5b2e391c55b3f0a83ebf66485eca95f4f95a9cf0bf92911496ec75.scope: Deactivated successfully.
Dec 06 10:09:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:30.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:30.335 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:30.335 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:30 np0005548790.localdomain sudo[301846]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:30 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-642dfed0bcc77efe389626036edc34af77fd1bf0016c843eff60777226fd1edf-merged.mount: Deactivated successfully.
Dec 06 10:09:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@-1(probing) e15  my rank is now 2 (was -1)
Dec 06 10:09:31 np0005548790.localdomain ceph-mon[301742]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:09:31 np0005548790.localdomain ceph-mon[301742]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1
Dec 06 10:09:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:09:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:09:32 np0005548790.localdomain systemd[1]: tmp-crun.3rCGU5.mount: Deactivated successfully.
Dec 06 10:09:32 np0005548790.localdomain podman[301917]: 2025-12-06 10:09:32.621438343 +0000 UTC m=+0.131694037 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Dec 06 10:09:32 np0005548790.localdomain podman[301917]: 2025-12-06 10:09:32.640860192 +0000 UTC m=+0.151115936 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 06 10:09:32 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:09:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:33.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:33.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:33.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.360 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:09:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:34.360 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:09:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:09:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 handle_auth_request failed to assign global_id
Dec 06 10:09:34 np0005548790.localdomain systemd[1]: tmp-crun.F50UC8.mount: Deactivated successfully.
Dec 06 10:09:34 np0005548790.localdomain podman[301938]: 2025-12-06 10:09:34.563688552 +0000 UTC m=+0.075707613 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:09:34 np0005548790.localdomain podman[301938]: 2025-12-06 10:09:34.577334236 +0000 UTC m=+0.089353327 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:09:34 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:09:34 np0005548790.localdomain sudo[301971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:34 np0005548790.localdomain sudo[301971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:34 np0005548790.localdomain sudo[301971]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 handle_auth_request failed to assign global_id
Dec 06 10:09:34 np0005548790.localdomain sudo[301989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:34 np0005548790.localdomain sudo[301989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 handle_auth_request failed to assign global_id
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.080 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.720s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 2025-12-06 10:09:35.225881016 +0000 UTC m=+0.083577153 container create 8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_brattain, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph)
Dec 06 10:09:35 np0005548790.localdomain systemd[1]: Started libpod-conmon-8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9.scope.
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.278 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.280 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11974MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.282 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.283 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:35 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 2025-12-06 10:09:35.195724181 +0000 UTC m=+0.053420338 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 2025-12-06 10:09:35.303072737 +0000 UTC m=+0.160768894 container init 8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_brattain, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True)
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 2025-12-06 10:09:35.314223975 +0000 UTC m=+0.171920132 container start 8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_brattain, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True)
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 2025-12-06 10:09:35.314510672 +0000 UTC m=+0.172206829 container attach 8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_brattain, GIT_BRANCH=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:09:35 np0005548790.localdomain romantic_brattain[302049]: 167 167
Dec 06 10:09:35 np0005548790.localdomain systemd[1]: libpod-8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9.scope: Deactivated successfully.
Dec 06 10:09:35 np0005548790.localdomain podman[302034]: 2025-12-06 10:09:35.319139956 +0000 UTC m=+0.176836143 container died 8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_brattain, RELEASE=main, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, version=7)
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.362 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.364 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.387 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:09:35 np0005548790.localdomain podman[302054]: 2025-12-06 10:09:35.425967479 +0000 UTC m=+0.090584680 container remove 8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_brattain, com.redhat.component=rhceph-container, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2025-11-26T19:44:28Z, version=7, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:09:35 np0005548790.localdomain systemd[1]: libpod-conmon-8d56f69fca8b1dd83d537871d204b6e037ca4b37ecf03591b90a88dc9e03d4c9.scope: Deactivated successfully.
Dec 06 10:09:35 np0005548790.localdomain sudo[301989]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:35 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 handle_auth_request failed to assign global_id
Dec 06 10:09:35 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 handle_auth_request failed to assign global_id
Dec 06 10:09:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2883906b5cc80bfaa6303a32c9fd4983de0ef5458555ae511caa0fb905013763-merged.mount: Deactivated successfully.
Dec 06 10:09:35 np0005548790.localdomain sudo[302090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:35 np0005548790.localdomain sudo[302090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:35 np0005548790.localdomain sudo[302090]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:35 np0005548790.localdomain sudo[302108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:09:35 np0005548790.localdomain sudo[302108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.843 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.850 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.873 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.875 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:09:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:35.877 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548790.vhcezv (monmap changed)...
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548790.vhcezv on np0005548790.localdomain
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548789 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 is new leader, mons np0005548788,np0005548789 in quorum (ranks 0,1)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: monmap epoch 15
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: last_changed 2025-12-06T10:09:29.475464+0000
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: min_mon_release 18 (reef)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: election_strategy: 1
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548790
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mgrmap e32: np0005548789.mzhmje(active, since 74s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Health check failed: 1/3 mons down, quorum np0005548788,np0005548789 (MON_DOWN)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm; 1/3 mons down, quorum np0005548788,np0005548789
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005548788,np0005548789
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]:     mon.np0005548790 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548790.kvkfyr", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548790.kvkfyr (monmap changed)...
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548790.kvkfyr on np0005548790.localdomain
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/862062483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: log_channel(cluster) log [INF] : mon.np0005548790 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mgrc update_daemon_metadata mon.np0005548790 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005548790.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005548790.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 06 10:09:36 np0005548790.localdomain systemd[1]: tmp-crun.jTsq4S.mount: Deactivated successfully.
Dec 06 10:09:36 np0005548790.localdomain podman[302202]: 2025-12-06 10:09:36.604100582 +0000 UTC m=+0.105935400 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, ceph=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548789 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 calling monitor election
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548788 is new leader, mons np0005548788,np0005548789,np0005548790 in quorum (ranks 0,1,2)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: monmap epoch 15
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: last_changed 2025-12-06T10:09:29.475464+0000
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: created 2025-12-06T07:57:14.295835+0000
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: min_mon_release 18 (reef)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: election_strategy: 1
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005548788
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005548789
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005548790
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: fsmap cephfs:1 {0=mds.np0005548790.vhcezv=up:active} 2 up:standby
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: osdmap e92: 6 total, 6 up, 6 in
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: mgrmap e32: np0005548789.mzhmje(active, since 76s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005548788,np0005548789)
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]:     stray daemon mgr.np0005548785.vhqlsq on host np0005548785.localdomain not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:09:36 np0005548790.localdomain ceph-mon[301742]:     stray host np0005548785.localdomain has 1 stray daemons: ['mgr.np0005548785.vhqlsq']
Dec 06 10:09:36 np0005548790.localdomain podman[302202]: 2025-12-06 10:09:36.737329149 +0000 UTC m=+0.239164007 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=)
Dec 06 10:09:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:09:37 np0005548790.localdomain podman[302305]: 2025-12-06 10:09:37.426467803 +0000 UTC m=+0.102713453 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:09:37 np0005548790.localdomain podman[302305]: 2025-12-06 10:09:37.51022662 +0000 UTC m=+0.186472280 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 10:09:37 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:09:37 np0005548790.localdomain sudo[302108]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:37 np0005548790.localdomain sudo[302345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:37 np0005548790.localdomain sudo[302345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:37 np0005548790.localdomain sudo[302345]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2288131588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:09:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:37 np0005548790.localdomain sudo[302363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:09:37 np0005548790.localdomain sudo[302363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:09:37.877 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:09:38 np0005548790.localdomain sudo[302363]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:38 np0005548790.localdomain sudo[302413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:09:38 np0005548790.localdomain sudo[302413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:38 np0005548790.localdomain sudo[302413]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:38 np0005548790.localdomain sudo[302431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:09:38 np0005548790.localdomain sudo[302431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:38 np0005548790.localdomain sudo[302431]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:38 np0005548790.localdomain sudo[302449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:38 np0005548790.localdomain sudo[302449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:38 np0005548790.localdomain sudo[302449]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:38 np0005548790.localdomain ceph-mon[301742]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:38 np0005548790.localdomain sudo[302467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:38 np0005548790.localdomain sudo[302467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:38 np0005548790.localdomain sudo[302467]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:38 np0005548790.localdomain sudo[302485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:38 np0005548790.localdomain sudo[302485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:38 np0005548790.localdomain sudo[302485]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:39 np0005548790.localdomain sudo[302519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302519]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:39 np0005548790.localdomain sudo[302537]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302537]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:09:39 np0005548790.localdomain sudo[302555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302555]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:39 np0005548790.localdomain sudo[302573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302573]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:39 np0005548790.localdomain sudo[302591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302591]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:39 np0005548790.localdomain sudo[302609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302609]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:39 np0005548790.localdomain sudo[302627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302627]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:39 np0005548790.localdomain sudo[302645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302645]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:39 np0005548790.localdomain sudo[302679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302679]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:39 np0005548790.localdomain sudo[302697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302697]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain sudo[302715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:39 np0005548790.localdomain sudo[302715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:39 np0005548790.localdomain sudo[302715]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2596033626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2596033626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:40 np0005548790.localdomain sudo[302733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:09:40 np0005548790.localdomain sudo[302733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:40 np0005548790.localdomain sudo[302733]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548788.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/3895678344' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:09:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:41 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548788 (monmap changed)...
Dec 06 10:09:41 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548788 on np0005548788.localdomain
Dec 06 10:09:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:09:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.2 (monmap changed)...
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:09:43 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.5 (monmap changed)...
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548788.erzujf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548788.yvwbqq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548788.erzujf (monmap changed)...
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548788.erzujf on np0005548788.localdomain
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548788.yvwbqq (monmap changed)...
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548788.yvwbqq on np0005548788.localdomain
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='client.44541 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfig service osd.default_drive_group
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548789 (monmap changed)...
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548789.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548789 on np0005548789.localdomain
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:46 np0005548790.localdomain ceph-mon[301742]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:47 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.1 (monmap changed)...
Dec 06 10:09:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 06 10:09:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:47 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.1 on np0005548789.localdomain
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.4 (monmap changed)...
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.4 on np0005548789.localdomain
Dec 06 10:09:48 np0005548790.localdomain ceph-mon[301742]: pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 570 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:09:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:09:48.392 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:09:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:09:48.392 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:09:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:09:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:09:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:09:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:09:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:09:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:09:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:09:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18701 "" "Go-http-client/1.1"
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.133056) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789133136, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12557, "num_deletes": 257, "total_data_size": 20203447, "memory_usage": 21296864, "flush_reason": "Manual Compaction"}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mds.mds.np0005548789.vxwwsq (monmap changed)...
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005548789.vxwwsq", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mds.mds.np0005548789.vxwwsq on np0005548789.localdomain
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789221696, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 17628724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12562, "table_properties": {"data_size": 17560864, "index_size": 37098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 316606, "raw_average_key_size": 26, "raw_value_size": 17358721, "raw_average_value_size": 1463, "num_data_blocks": 1411, "num_entries": 11864, "num_filter_entries": 11864, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 1765015768, "file_creation_time": 1765015789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 88732 microseconds, and 32316 cpu microseconds.
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.221758) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 17628724 bytes OK
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.221819) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.224988) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.225025) EVENT_LOG_v1 {"time_micros": 1765015789225008, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.225046) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20117307, prev total WAL file size 20180235, number of live WAL files 2.
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.229538) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(16MB) 8(1762B)]
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789229694, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 17630486, "oldest_snapshot_seqno": -1}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 11614 keys, 17625214 bytes, temperature: kUnknown
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789354350, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 17625214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17558010, "index_size": 37087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29061, "raw_key_size": 311837, "raw_average_key_size": 26, "raw_value_size": 17359126, "raw_average_value_size": 1494, "num_data_blocks": 1411, "num_entries": 11614, "num_filter_entries": 11614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765015789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.354690) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 17625214 bytes
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.356639) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.6 rd, 141.6 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(16.8, 0.0 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 11869, records dropped: 255 output_compression: NoCompression
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.356668) EVENT_LOG_v1 {"time_micros": 1765015789356656, "job": 4, "event": "compaction_finished", "compaction_time_micros": 124510, "compaction_time_cpu_micros": 48384, "output_level": 6, "num_output_files": 1, "total_output_size": 17625214, "num_input_records": 11869, "num_output_records": 11614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789359156, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015789359206, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:09:49.229409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e92 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e92 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 06 10:09:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 e93: 6 total, 6 up, 6 in
Dec 06 10:09:49 np0005548790.localdomain sshd[297758]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:09:49 np0005548790.localdomain systemd[1]: session-68.scope: Deactivated successfully.
Dec 06 10:09:49 np0005548790.localdomain systemd[1]: session-68.scope: Consumed 25.016s CPU time.
Dec 06 10:09:49 np0005548790.localdomain systemd-logind[760]: Session 68 logged out. Waiting for processes to exit.
Dec 06 10:09:49 np0005548790.localdomain systemd-logind[760]: Removed session 68.
Dec 06 10:09:49 np0005548790.localdomain sshd[302752]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:09:49 np0005548790.localdomain sshd[302752]: Accepted publickey for ceph-admin from 192.168.122.103 port 48272 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:09:49 np0005548790.localdomain systemd-logind[760]: New session 71 of user ceph-admin.
Dec 06 10:09:49 np0005548790.localdomain systemd[1]: Started Session 71 of User ceph-admin.
Dec 06 10:09:49 np0005548790.localdomain sshd[302752]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:09:50 np0005548790.localdomain sudo[302756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:50 np0005548790.localdomain sudo[302756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:50 np0005548790.localdomain sudo[302756]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:50 np0005548790.localdomain sudo[302774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:09:50 np0005548790.localdomain sudo[302774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' 
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26618 172.18.0.107:0/2196335751' entity='mgr.np0005548789.mzhmje' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/90840268' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: Activating manager daemon np0005548785.vhqlsq
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: osdmap e93: 6 total, 6 up, 6 in
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/90840268' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: mgrmap e33: np0005548785.vhqlsq(active, starting, since 0.0485672s), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548787.umwsra", "id": "np0005548787.umwsra"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: Manager daemon np0005548785.vhqlsq is now available
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: removing stray HostCache host record np0005548787.localdomain.devices.0
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"}]': finished
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005548787.localdomain.devices.0"}]': finished
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548785.vhqlsq/mirror_snapshot_schedule"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548785.vhqlsq/trash_purge_schedule"} : dispatch
Dec 06 10:09:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:09:50 np0005548790.localdomain podman[302834]: 2025-12-06 10:09:50.785586945 +0000 UTC m=+0.088663079 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 10:09:50 np0005548790.localdomain podman[302834]: 2025-12-06 10:09:50.868230851 +0000 UTC m=+0.171306945 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:09:50 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:09:51 np0005548790.localdomain podman[302881]: 2025-12-06 10:09:51.056308115 +0000 UTC m=+0.099183140 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7)
Dec 06 10:09:51 np0005548790.localdomain podman[302881]: 2025-12-06 10:09:51.160055586 +0000 UTC m=+0.202930581 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7)
Dec 06 10:09:51 np0005548790.localdomain ceph-mon[301742]: mgrmap e34: np0005548785.vhqlsq(active, since 1.08564s), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:51 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:09:50] ENGINE Bus STARTING
Dec 06 10:09:51 np0005548790.localdomain sudo[302774]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:51 np0005548790.localdomain sudo[303002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:51 np0005548790.localdomain sudo[303002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:51 np0005548790.localdomain sudo[303002]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:52 np0005548790.localdomain sudo[303020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:09:52 np0005548790.localdomain sudo[303020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:52 np0005548790.localdomain sudo[303020]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:52 np0005548790.localdomain sudo[303070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:52 np0005548790.localdomain sudo[303070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:52 np0005548790.localdomain sudo[303070]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:52 np0005548790.localdomain sudo[303088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:09:52 np0005548790.localdomain sudo[303088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain sudo[303088]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:53 np0005548790.localdomain sudo[303126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:09:53 np0005548790.localdomain sudo[303126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain sudo[303126]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:53 np0005548790.localdomain sudo[303144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:09:53 np0005548790.localdomain sudo[303144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain sudo[303144]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:09:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:09:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:09:53 np0005548790.localdomain sudo[303162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:53 np0005548790.localdomain sudo[303162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain sudo[303162]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:53 np0005548790.localdomain sudo[303180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:53 np0005548790.localdomain sudo[303180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain sudo[303180]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:53 np0005548790.localdomain sudo[303198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:53 np0005548790.localdomain sudo[303198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain sudo[303198]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: mgrmap e35: np0005548785.vhqlsq(active, since 3s), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:09:53 np0005548790.localdomain sudo[303232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:53 np0005548790.localdomain sudo[303232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1019772364 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:09:53 np0005548790.localdomain sudo[303232]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:09:54 np0005548790.localdomain sudo[303250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303250]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:09:54 np0005548790.localdomain sudo[303268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303268]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:54 np0005548790.localdomain sudo[303286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303286]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:54 np0005548790.localdomain sudo[303304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303304]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:54 np0005548790.localdomain sudo[303322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303322]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:54 np0005548790.localdomain sudo[303340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303340]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:54 np0005548790.localdomain sudo[303358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303358]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:54 np0005548790.localdomain sudo[303392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303392]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:09:54 np0005548790.localdomain sudo[303410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303410]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:54 np0005548790.localdomain sudo[303428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303428]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain sudo[303446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:09:54 np0005548790.localdomain sudo[303446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303446]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:09:54 np0005548790.localdomain ceph-mon[301742]: Standby manager daemon np0005548789.mzhmje started
Dec 06 10:09:54 np0005548790.localdomain sudo[303464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:09:54 np0005548790.localdomain sudo[303464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:54 np0005548790.localdomain sudo[303464]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:09:55 np0005548790.localdomain sudo[303482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303482]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:55 np0005548790.localdomain sudo[303500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303500]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:09:55 np0005548790.localdomain sudo[303518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303518]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:09:55 np0005548790.localdomain sudo[303552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303552]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:09:55 np0005548790.localdomain sudo[303570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303570]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:09:55 np0005548790.localdomain sudo[303588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303588]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:55 np0005548790.localdomain sudo[303606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303606]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:09:55 np0005548790.localdomain sudo[303624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303624]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:09:55 np0005548790.localdomain sudo[303642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303642]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:55 np0005548790.localdomain sudo[303660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303660]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain sudo[303678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:09:55 np0005548790.localdomain sudo[303678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:55 np0005548790.localdomain sudo[303678]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: mgrmap e36: np0005548785.vhqlsq(active, since 5s), standbys: np0005548788.yvwbqq, np0005548787.umwsra, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:09:55 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:09:56 np0005548790.localdomain sudo[303712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:09:56 np0005548790.localdomain sudo[303712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:56 np0005548790.localdomain sudo[303712]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:09:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:09:56 np0005548790.localdomain sudo[303742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:09:56 np0005548790.localdomain sudo[303742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:09:56 np0005548790.localdomain sudo[303742]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:56 np0005548790.localdomain podman[303730]: 2025-12-06 10:09:56.192664113 +0000 UTC m=+0.103482985 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 10:09:56 np0005548790.localdomain podman[303731]: 2025-12-06 10:09:56.245896184 +0000 UTC m=+0.155771380 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git)
Dec 06 10:09:56 np0005548790.localdomain podman[303730]: 2025-12-06 10:09:56.258349398 +0000 UTC m=+0.169168270 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:09:56 np0005548790.localdomain sudo[303785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:09:56 np0005548790.localdomain sudo[303785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:56 np0005548790.localdomain sudo[303785]: pam_unix(sudo:session): session closed for user root
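The sudo sequence above (touch a `.new` file, chown, chmod 600, then mv into place) is a staged atomic install: the keyring is fully written under a temp name with its final ownership and mode, then renamed over the destination so no reader ever sees a partial or world-readable file. A minimal stdlib sketch of the same pattern (an illustration only, not the actual cephadm code; `install_secret` is a hypothetical helper name):

```python
import os
import tempfile

def install_secret(path: str, data: bytes, mode: int = 0o600) -> None:
    """Atomically install a secret file: stage a sibling temp file with the
    final permissions already set, then rename it over the destination --
    mirroring the touch/chmod 600/mv sequence in the log (sketch only)."""
    directory = os.path.dirname(os.path.abspath(path))
    os.makedirs(directory, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
    try:
        os.write(fd, data)
        os.fchmod(fd, mode)   # mode is fixed before the file becomes visible
    finally:
        os.close(fd)
    os.rename(tmp, path)      # atomic on the same filesystem, like mv(1)
```

The rename is only atomic because the temp file is created in the destination's own directory tree, which is why cephadm stages under a mirrored `/tmp/cephadm-<fsid>/...` path and moves files in a final step.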
Dec 06 10:09:56 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:09:56 np0005548790.localdomain podman[303774]: 2025-12-06 10:09:56.332493827 +0000 UTC m=+0.139275491 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:09:56 np0005548790.localdomain podman[303774]: 2025-12-06 10:09:56.341038975 +0000 UTC m=+0.147820639 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:09:56 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:09:56 np0005548790.localdomain podman[303731]: 2025-12-06 10:09:56.359220871 +0000 UTC m=+0.269096107 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:09:56 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:09:56 np0005548790.localdomain sudo[303827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:09:56 np0005548790.localdomain sudo[303827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:56 np0005548790.localdomain sudo[303827]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 06 10:09:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
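The mon audit lines above embed each dispatched command as a JSON object after `cmd=`. When triaging a log like this, pulling out the command prefixes gives a quick picture of what the mgr was driving. A small sketch for the exact format shown above (`audited_prefixes` is a hypothetical helper, not a Ceph tool):

```python
import json
import re

# Matches the audit shape: cmd={"prefix": "...", ...} : dispatch
CMD_RE = re.compile(r"cmd=(\{.*?\}) : dispatch")

def audited_prefixes(lines):
    """Return the 'prefix' of every mon-audited command found in `lines`
    (sketch keyed to the cmd={...} : dispatch format in this log)."""
    prefixes = []
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            prefixes.append(json.loads(m.group(1)).get("prefix"))
    return prefixes
```

Run over this stretch of the log it would yield entries like `"osd tree"`, `"auth get"`, and `"config generate-minimal-conf"`, matching the reconfiguration loop the mgr is executing.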
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:09:56] ENGINE Error in 'start' listener <bound method Server.start of <cephadm.service_discovery.Root object at 0x7fef2d81f340>>
                                                           Traceback (most recent call last):
                                                             File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 230, in publish
                                                               output.append(listener(*args, **kwargs))
                                                             File "/lib/python3.9/site-packages/cherrypy/_cpserver.py", line 180, in start
                                                               super(Server, self).start()
                                                             File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 184, in start
                                                               self.wait()
                                                             File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 260, in wait
                                                               portend.occupied(*self.bound_addr, timeout=Timeouts.occupied)
                                                             File "/lib/python3.9/site-packages/portend.py", line 162, in occupied
                                                               raise Timeout("Port {port} not bound on {host}.".format(**locals()))
                                                           portend.Timeout: Port 8765 not bound on 172.18.0.103.
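The `portend.Timeout` above means CherryPy's start-up check waited for something to begin listening on 172.18.0.103:8765 (the cephadm service-discovery endpoint) and gave up; `portend.occupied()` polls until a TCP connect succeeds. A rough stdlib equivalent of that wait-for-port idea (an assumption-laden sketch, not portend's actual implementation):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 5.0) -> None:
    """Poll until a TCP listener accepts connections on (host, port),
    raising TimeoutError otherwise -- roughly what portend.occupied()
    does for CherryPy's bind check (stdlib sketch, not portend)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return            # something is bound and accepting
        except OSError:
            time.sleep(0.1)       # not bound yet; retry until the deadline
    raise TimeoutError(f"Port {port} not bound on {host}.")
```

In this log the error is transient: the mgr had just failed over (mgrmap e36 shows the new active mgr up for only 5s), so the listener simply was not bound yet when the check timed out.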
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.2 on np0005548788.localdomain
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 06 10:09:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020049712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.5 on np0005548788.localdomain
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005548789.mzhmje", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:09:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:59 np0005548790.localdomain sudo[303845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:09:59 np0005548790.localdomain sudo[303845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:59 np0005548790.localdomain sudo[303845]: pam_unix(sudo:session): session closed for user root
Dec 06 10:09:59 np0005548790.localdomain sudo[303863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:09:59 np0005548790.localdomain sudo[303863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mgr.np0005548789.mzhmje (monmap changed)...
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mgr.np0005548789.mzhmje on np0005548789.localdomain
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005548790.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:09:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 2025-12-06 10:10:00.067972444 +0000 UTC m=+0.075350212 container create 971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_thompson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container)
Dec 06 10:10:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc.scope.
Dec 06 10:10:00 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 2025-12-06 10:10:00.038703193 +0000 UTC m=+0.046080981 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 2025-12-06 10:10:00.140969675 +0000 UTC m=+0.148347413 container init 971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_thompson, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, ceph=True, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, architecture=x86_64)
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 2025-12-06 10:10:00.151997498 +0000 UTC m=+0.159375256 container start 971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_thompson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64)
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 2025-12-06 10:10:00.152387629 +0000 UTC m=+0.159765397 container attach 971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_thompson, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True)
Dec 06 10:10:00 np0005548790.localdomain optimistic_thompson[303913]: 167 167
Dec 06 10:10:00 np0005548790.localdomain systemd[1]: libpod-971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc.scope: Deactivated successfully.
Dec 06 10:10:00 np0005548790.localdomain podman[303898]: 2025-12-06 10:10:00.156470038 +0000 UTC m=+0.163847826 container died 971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_thompson, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218)
Dec 06 10:10:00 np0005548790.localdomain podman[303918]: 2025-12-06 10:10:00.25052466 +0000 UTC m=+0.085071893 container remove 971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_thompson, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, distribution-scope=public, ceph=True, GIT_BRANCH=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, RELEASE=main, io.openshift.tags=rhceph ceph)
Dec 06 10:10:00 np0005548790.localdomain systemd[1]: libpod-conmon-971ffa7959c0d0f37033abaac4c85058884f8d8077f0d0f9ce6197c9efbf2bbc.scope: Deactivated successfully.
Dec 06 10:10:00 np0005548790.localdomain sudo[303863]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:00 np0005548790.localdomain sudo[303934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:00 np0005548790.localdomain sudo[303934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:00 np0005548790.localdomain sudo[303934]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:00 np0005548790.localdomain sudo[303952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:00 np0005548790.localdomain sudo[303952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: Reconfiguring crash.np0005548790 (monmap changed)...
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon crash.np0005548790 on np0005548790.localdomain
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 06 10:10:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 2025-12-06 10:10:01.027438278 +0000 UTC m=+0.073536825 container create 0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_bhaskara, GIT_BRANCH=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Dec 06 10:10:01 np0005548790.localdomain systemd[1]: Started libpod-conmon-0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd.scope.
Dec 06 10:10:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2dfa2b523be447619fb6d40f9e4bf6840a0abf42e03c3a923ffc4385abc662cc-merged.mount: Deactivated successfully.
Dec 06 10:10:01 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 2025-12-06 10:10:00.998304 +0000 UTC m=+0.044402587 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 2025-12-06 10:10:01.102287847 +0000 UTC m=+0.148386404 container init 0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_bhaskara, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, release=1763362218, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, ceph=True, com.redhat.component=rhceph-container)
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 2025-12-06 10:10:01.113045574 +0000 UTC m=+0.159144121 container start 0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_bhaskara, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, CEPH_POINT_RELEASE=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4, GIT_CLEAN=True)
Dec 06 10:10:01 np0005548790.localdomain mystifying_bhaskara[304003]: 167 167
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 2025-12-06 10:10:01.113300771 +0000 UTC m=+0.159399398 container attach 0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_bhaskara, distribution-scope=public, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, release=1763362218)
Dec 06 10:10:01 np0005548790.localdomain systemd[1]: libpod-0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd.scope: Deactivated successfully.
Dec 06 10:10:01 np0005548790.localdomain podman[303988]: 2025-12-06 10:10:01.115519149 +0000 UTC m=+0.161617756 container died 0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_bhaskara, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, name=rhceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.41.4, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Dec 06 10:10:01 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c8fc9c7ebe916db5b20ad8992cf04446a86bffcff376640313980c1aefeb6ddf-merged.mount: Deactivated successfully.
Dec 06 10:10:01 np0005548790.localdomain podman[304008]: 2025-12-06 10:10:01.217110113 +0000 UTC m=+0.092078860 container remove 0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_bhaskara, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, vcs-type=git, version=7, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 06 10:10:01 np0005548790.localdomain systemd[1]: libpod-conmon-0647cd35c1be5d63fdfa41c3da3955dbcfed861782ed1713cbaed773ef85d3fd.scope: Deactivated successfully.
Dec 06 10:10:01 np0005548790.localdomain sudo[303952]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:01 np0005548790.localdomain sudo[304031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:01 np0005548790.localdomain sudo[304031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:01 np0005548790.localdomain sudo[304031]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:01 np0005548790.localdomain sudo[304049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:01 np0005548790.localdomain sudo[304049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.0 (monmap changed)...
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.0 on np0005548790.localdomain
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 06 10:10:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 2025-12-06 10:10:02.159592672 +0000 UTC m=+0.090620321 container create ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_agnesi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph)
Dec 06 10:10:02 np0005548790.localdomain systemd[1]: Started libpod-conmon-ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a.scope.
Dec 06 10:10:02 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 2025-12-06 10:10:02.123896189 +0000 UTC m=+0.054923838 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 2025-12-06 10:10:02.232339325 +0000 UTC m=+0.163366934 container init ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_agnesi, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=7, com.redhat.component=rhceph-container, ceph=True)
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 2025-12-06 10:10:02.24416355 +0000 UTC m=+0.175191229 container start ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_agnesi, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, vendor=Red Hat, Inc., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 2025-12-06 10:10:02.244843809 +0000 UTC m=+0.175871488 container attach ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_agnesi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, architecture=x86_64)
Dec 06 10:10:02 np0005548790.localdomain vigorous_agnesi[304098]: 167 167
Dec 06 10:10:02 np0005548790.localdomain podman[304083]: 2025-12-06 10:10:02.248422805 +0000 UTC m=+0.179450514 container died ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_agnesi, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, release=1763362218, ceph=True, io.buildah.version=1.41.4)
Dec 06 10:10:02 np0005548790.localdomain systemd[1]: libpod-ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a.scope: Deactivated successfully.
Dec 06 10:10:02 np0005548790.localdomain podman[304103]: 2025-12-06 10:10:02.350049968 +0000 UTC m=+0.091018071 container remove ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_agnesi, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, version=7, distribution-scope=public, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, RELEASE=main, vcs-type=git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.)
Dec 06 10:10:02 np0005548790.localdomain systemd[1]: libpod-conmon-ca83bdeccc4fb3b72970264e3d6ad003783eb2c52aa03ae9381cc8b86a7a4d3a.scope: Deactivated successfully.
Dec 06 10:10:02 np0005548790.localdomain sudo[304049]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:02 np0005548790.localdomain sudo[304127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:10:02 np0005548790.localdomain sudo[304127]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:10:02 np0005548790.localdomain sudo[304127]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:03 np0005548790.localdomain podman[304145]: 2025-12-06 10:10:03.020930455 +0000 UTC m=+0.098782910 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:10:03 np0005548790.localdomain podman[304145]: 2025-12-06 10:10:03.040197079 +0000 UTC m=+0.118049564 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:10:03 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE Error in 'start' listener <bound method Server.start of <cephadm.agent.HostData object at 0x7fefc03cbf40>>
                                                           Traceback (most recent call last):
                                                             File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 230, in publish
                                                               output.append(listener(*args, **kwargs))
                                                             File "/lib/python3.9/site-packages/cherrypy/_cpserver.py", line 180, in start
                                                               super(Server, self).start()
                                                             File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 184, in start
                                                               self.wait()
                                                             File "/lib/python3.9/site-packages/cherrypy/process/servers.py", line 260, in wait
                                                               portend.occupied(*self.bound_addr, timeout=Timeouts.occupied)
                                                             File "/lib/python3.9/site-packages/portend.py", line 162, in occupied
                                                               raise Timeout("Port {port} not bound on {host}.".format(**locals()))
                                                           portend.Timeout: Port 7150 not bound on 172.18.0.103.
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE Shutting down due to error in start listener:
                                                           Traceback (most recent call last):
                                                             File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 268, in start
                                                               self.publish('start')
                                                             File "/lib/python3.9/site-packages/cherrypy/process/wspbus.py", line 248, in publish
                                                               raise exc
                                                           cherrypy.process.wspbus.ChannelFailures: Timeout('Port 8765 not bound on 172.18.0.103.')
                                                           Timeout('Port 7150 not bound on 172.18.0.103.')
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE Bus STOPPING
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('172.18.0.103', 8765)) already shut down
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('172.18.0.103', 7150)) already shut down
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE Bus STOPPED
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE Bus EXITING
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:01] ENGINE Bus EXITED
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: Failed to run cephadm http server: Timeout('Port 8765 not bound on 172.18.0.103.')
                                                           Timeout('Port 7150 not bound on 172.18.0.103.')
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: Reconfiguring osd.3 (monmap changed)...
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon osd.3 on np0005548790.localdomain
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:10:03 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-ce0e49e0de53dc0b2e98390f3ca90e07d8014c105cd98b994c80d3e003f90db7-merged.mount: Deactivated successfully.
Dec 06 10:10:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054642 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:10:05 np0005548790.localdomain systemd[1]: tmp-crun.bEBBui.mount: Deactivated successfully.
Dec 06 10:10:05 np0005548790.localdomain podman[304166]: 2025-12-06 10:10:05.559064407 +0000 UTC m=+0.072407575 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:10:05 np0005548790.localdomain podman[304166]: 2025-12-06 10:10:05.573175993 +0000 UTC m=+0.086519181 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:10:05 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:10:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.26690 172.18.0.103:0/2299561010' entity='mgr.np0005548785.vhqlsq' 
Dec 06 10:10:06 np0005548790.localdomain sshd[299981]: Received disconnect from 192.168.122.11 port 46396:11: disconnected by user
Dec 06 10:10:06 np0005548790.localdomain sshd[299981]: Disconnected from user tripleo-admin 192.168.122.11 port 46396
Dec 06 10:10:06 np0005548790.localdomain sshd[299945]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 06 10:10:06 np0005548790.localdomain systemd[1]: session-69.scope: Deactivated successfully.
Dec 06 10:10:06 np0005548790.localdomain systemd[1]: session-69.scope: Consumed 1.764s CPU time.
Dec 06 10:10:06 np0005548790.localdomain systemd-logind[760]: Session 69 logged out. Waiting for processes to exit.
Dec 06 10:10:06 np0005548790.localdomain systemd-logind[760]: Removed session 69.
Dec 06 10:10:07 np0005548790.localdomain ceph-mon[301742]: mgrmap e37: np0005548785.vhqlsq(active, since 17s), standbys: np0005548788.yvwbqq, np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:10:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:10:08 np0005548790.localdomain systemd[1]: tmp-crun.J3lYBE.mount: Deactivated successfully.
Dec 06 10:10:08 np0005548790.localdomain podman[304189]: 2025-12-06 10:10:08.569283815 +0000 UTC m=+0.087525528 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 06 10:10:08 np0005548790.localdomain systemd[296658]: Starting Mark boot as successful...
Dec 06 10:10:08 np0005548790.localdomain systemd[296658]: Finished Mark boot as successful.
Dec 06 10:10:08 np0005548790.localdomain podman[304189]: 2025-12-06 10:10:08.607135496 +0000 UTC m=+0.125377259 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 06 10:10:08 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:10:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:10:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5320 writes, 23K keys, 5320 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5320 writes, 739 syncs, 7.20 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 50 writes, 178 keys, 50 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                                          Interval WAL: 50 writes, 22 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:10:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:10 np0005548790.localdomain ceph-mon[301742]: Health check update: 2 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 06 10:10:10 np0005548790.localdomain ceph-mon[301742]: Health check update: 2 stray host(s) with 2 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 06 10:10:11 np0005548790.localdomain ceph-mon[301742]: pgmap v3: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:10:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.2 total, 600.0 interval
                                                          Cumulative writes: 5681 writes, 24K keys, 5681 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5681 writes, 851 syncs, 6.68 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 196 writes, 469 keys, 196 commit groups, 1.0 writes per commit group, ingest: 0.43 MB, 0.00 MB/s
                                                          Interval WAL: 196 writes, 90 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:10:13 np0005548790.localdomain ceph-mon[301742]: pgmap v4: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:15 np0005548790.localdomain ceph-mon[301742]: pgmap v5: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Activating special unit Exit the Session...
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped target Main User Target.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped target Basic System.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped target Paths.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped target Sockets.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped target Timers.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Closed D-Bus User Message Bus Socket.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Stopped Create User's Volatile Files and Directories.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Removed slice User Application Slice.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Reached target Shutdown.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Finished Exit the Session.
Dec 06 10:10:16 np0005548790.localdomain systemd[299965]: Reached target Exit the Session.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 06 10:10:16 np0005548790.localdomain systemd[1]: user-1003.slice: Consumed 2.234s CPU time.
Dec 06 10:10:17 np0005548790.localdomain ceph-mon[301742]: pgmap v6: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:10:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:10:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:10:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:10:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:10:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18709 "" "Go-http-client/1.1"
Dec 06 10:10:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:19 np0005548790.localdomain ceph-mon[301742]: pgmap v7: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:10:21 np0005548790.localdomain podman[304216]: 2025-12-06 10:10:21.575597993 +0000 UTC m=+0.088376631 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:10:21 np0005548790.localdomain podman[304216]: 2025-12-06 10:10:21.610224168 +0000 UTC m=+0.123002786 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec 06 10:10:21 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:10:21 np0005548790.localdomain ceph-mon[301742]: pgmap v8: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:10:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:10:23 np0005548790.localdomain ceph-mon[301742]: pgmap v9: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:25 np0005548790.localdomain ceph-mon[301742]: pgmap v10: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:10:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:10:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:10:26 np0005548790.localdomain podman[304235]: 2025-12-06 10:10:26.572935339 +0000 UTC m=+0.076955706 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:10:26 np0005548790.localdomain podman[304235]: 2025-12-06 10:10:26.586728788 +0000 UTC m=+0.090749195 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:10:26 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:10:26 np0005548790.localdomain podman[304236]: 2025-12-06 10:10:26.635985442 +0000 UTC m=+0.132552750 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, config_id=edpm, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 10:10:26 np0005548790.localdomain podman[304236]: 2025-12-06 10:10:26.708352105 +0000 UTC m=+0.204919343 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7)
Dec 06 10:10:26 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:10:26 np0005548790.localdomain podman[304234]: 2025-12-06 10:10:26.732727057 +0000 UTC m=+0.238305645 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:10:26 np0005548790.localdomain podman[304234]: 2025-12-06 10:10:26.741900481 +0000 UTC m=+0.247479059 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:10:26 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:10:27 np0005548790.localdomain ceph-mon[301742]: pgmap v11: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:28 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3108344624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:29.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:29.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:10:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:29.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:10:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:29.352 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:10:29 np0005548790.localdomain ceph-mon[301742]: pgmap v12: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3639889960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:31.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:31.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:31 np0005548790.localdomain ceph-mon[301742]: pgmap v13: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:32.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:10:33 np0005548790.localdomain podman[304297]: 2025-12-06 10:10:33.559999002 +0000 UTC m=+0.078033895 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:10:33 np0005548790.localdomain podman[304297]: 2025-12-06 10:10:33.574199221 +0000 UTC m=+0.092234094 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:10:33 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:10:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:33 np0005548790.localdomain ceph-mon[301742]: pgmap v14: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.381 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.381 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.382 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.382 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.383 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:10:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4114425361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:34.848 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:34 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4114425361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.076 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.078 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12025MB free_disk=0.0GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.079 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.079 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.479 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.480 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=0GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.509 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:10:35 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:10:35 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1769226532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.946 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:10:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:35.953 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:10:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:36.019 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updated inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 10:10:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:36.019 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 10:10:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:36.020 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:10:36 np0005548790.localdomain ceph-mon[301742]: pgmap v15: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:36 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1769226532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:36.052 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:10:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:36.052 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:10:36 np0005548790.localdomain podman[304359]: 2025-12-06 10:10:36.311855051 +0000 UTC m=+0.089569943 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:10:36 np0005548790.localdomain podman[304359]: 2025-12-06 10:10:36.350308728 +0000 UTC m=+0.128023600 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:10:36 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:10:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:37.049 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:37.067 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:37.067 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:37.068 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:10:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:10:37.068 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:10:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 e94: 6 total, 6 up, 6 in
Dec 06 10:10:37 np0005548790.localdomain sshd[302752]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:10:37 np0005548790.localdomain systemd-logind[760]: Session 71 logged out. Waiting for processes to exit.
Dec 06 10:10:37 np0005548790.localdomain systemd[1]: session-71.scope: Deactivated successfully.
Dec 06 10:10:37 np0005548790.localdomain systemd[1]: session-71.scope: Consumed 8.769s CPU time.
Dec 06 10:10:37 np0005548790.localdomain systemd-logind[760]: Removed session 71.
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: pgmap v16: 177 pgs: 177 unknown; 0 B data, 0 B used, 0 B / 0 B avail
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/3346912753' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: Activating manager daemon np0005548788.yvwbqq
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: osdmap e94: 6 total, 6 up, 6 in
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/3346912753' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: mgrmap e38: np0005548788.yvwbqq(active, starting, since 0.0607571s), standbys: np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: Manager daemon np0005548788.yvwbqq is now available
Dec 06 10:10:38 np0005548790.localdomain sshd[304382]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:10:38 np0005548790.localdomain sshd[304382]: Accepted publickey for ceph-admin from 192.168.122.106 port 44324 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:10:38 np0005548790.localdomain systemd-logind[760]: New session 72 of user ceph-admin.
Dec 06 10:10:38 np0005548790.localdomain systemd[1]: Started Session 72 of User ceph-admin.
Dec 06 10:10:38 np0005548790.localdomain sshd[304382]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:10:38 np0005548790.localdomain sudo[304386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:38 np0005548790.localdomain sudo[304386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:38 np0005548790.localdomain sudo[304386]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:38 np0005548790.localdomain sudo[304404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:10:38 np0005548790.localdomain sudo[304404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/261091109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:10:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/261091109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:10:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/mirror_snapshot_schedule"} : dispatch
Dec 06 10:10:39 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548788.yvwbqq/trash_purge_schedule"} : dispatch
Dec 06 10:10:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3616421039' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:39 np0005548790.localdomain ceph-mon[301742]: mgrmap e39: np0005548788.yvwbqq(active, since 1.0804s), standbys: np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:10:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/261091109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:10:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/261091109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:10:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:10:39 np0005548790.localdomain podman[304465]: 2025-12-06 10:10:39.28613238 +0000 UTC m=+0.085485623 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:10:39 np0005548790.localdomain podman[304465]: 2025-12-06 10:10:39.362488089 +0000 UTC m=+0.161841362 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:10:39 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:10:39 np0005548790.localdomain podman[304518]: 2025-12-06 10:10:39.559007657 +0000 UTC m=+0.103709671 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:10:39 np0005548790.localdomain podman[304518]: 2025-12-06 10:10:39.691565828 +0000 UTC m=+0.236267872 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1763362218, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/212188230' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:39] ENGINE Bus STARTING
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:39] ENGINE Serving on http://172.18.0.106:8765
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:39] ENGINE Serving on https://172.18.0.106:7150
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:39] ENGINE Bus STARTED
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:10:39] ENGINE Client ('172.18.0.106', 48356) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 2 stray daemon(s) not managed by cephadm)
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: Health check cleared: CEPHADM_STRAY_HOST (was: 2 stray host(s) with 2 daemon(s) not managed by cephadm)
Dec 06 10:10:40 np0005548790.localdomain ceph-mon[301742]: Cluster is now healthy
Dec 06 10:10:40 np0005548790.localdomain sudo[304404]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:40 np0005548790.localdomain sudo[304632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:40 np0005548790.localdomain sudo[304632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:40 np0005548790.localdomain sudo[304632]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:40 np0005548790.localdomain sudo[304650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:10:40 np0005548790.localdomain sudo[304650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='client.44646 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:41 np0005548790.localdomain sudo[304650]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:41 np0005548790.localdomain sudo[304700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:41 np0005548790.localdomain sudo[304700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:41 np0005548790.localdomain sudo[304700]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:41 np0005548790.localdomain sudo[304718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:10:41 np0005548790.localdomain sudo[304718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:41 np0005548790.localdomain sudo[304718]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:10:42 np0005548790.localdomain sudo[304754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304754]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:10:42 np0005548790.localdomain sudo[304772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304772]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:10:42 np0005548790.localdomain sudo[304790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304790]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:42 np0005548790.localdomain sudo[304808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304808]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:10:42 np0005548790.localdomain sudo[304826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304826]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: mgrmap e40: np0005548788.yvwbqq(active, since 3s), standbys: np0005548790.kvkfyr, np0005548789.mzhmje
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:10:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:42 np0005548790.localdomain sudo[304860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:10:42 np0005548790.localdomain sudo[304860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304860]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:10:42 np0005548790.localdomain sudo[304878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304878]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:10:42 np0005548790.localdomain sudo[304896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304896]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:10:42 np0005548790.localdomain sudo[304914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304914]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:10:42 np0005548790.localdomain sudo[304932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304932]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:10:42 np0005548790.localdomain sudo[304950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304950]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:42 np0005548790.localdomain sudo[304968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:42 np0005548790.localdomain sudo[304968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:42 np0005548790.localdomain sudo[304968]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[304986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:10:43 np0005548790.localdomain sudo[304986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[304986]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:10:43 np0005548790.localdomain sudo[305020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305020]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:10:43 np0005548790.localdomain sudo[305038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305038]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:10:43 np0005548790.localdomain sudo[305056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305056]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:10:43 np0005548790.localdomain sudo[305074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305074]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: from='client.44655 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Saving service mon spec with placement label:mon
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: Standby manager daemon np0005548785.vhqlsq started
Dec 06 10:10:43 np0005548790.localdomain sudo[305092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:10:43 np0005548790.localdomain sudo[305092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305092]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:10:43 np0005548790.localdomain sudo[305110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305110]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:43 np0005548790.localdomain sudo[305128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305128]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:10:43 np0005548790.localdomain sudo[305146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305146]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain sudo[305180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:10:43 np0005548790.localdomain sudo[305180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305180]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:43 np0005548790.localdomain sudo[305198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:10:43 np0005548790.localdomain sudo[305198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:43 np0005548790.localdomain sudo[305198]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:10:44 np0005548790.localdomain sudo[305216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305216]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:10:44 np0005548790.localdomain sudo[305234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305234]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:10:44 np0005548790.localdomain sudo[305252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305252]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:10:44 np0005548790.localdomain sudo[305270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305270]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:44 np0005548790.localdomain sudo[305288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305288]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: from='client.44658 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005548790", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: mgrmap e41: np0005548788.yvwbqq(active, since 6s), standbys: np0005548790.kvkfyr, np0005548785.vhqlsq, np0005548789.mzhmje
Dec 06 10:10:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:10:44 np0005548790.localdomain sudo[305306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:10:44 np0005548790.localdomain sudo[305306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305306]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:10:44 np0005548790.localdomain sudo[305340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305340]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:10:44 np0005548790.localdomain sudo[305358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305358]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:44 np0005548790.localdomain sudo[305376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:10:44 np0005548790.localdomain sudo[305376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:44 np0005548790.localdomain sudo[305376]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:45 np0005548790.localdomain sudo[305394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:10:45 np0005548790.localdomain sudo[305394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:45 np0005548790.localdomain sudo[305394]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:10:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 0 B/s wr, 19 op/s
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548788 (monmap changed)...
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548788 on np0005548788.localdomain
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:10:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:46 np0005548790.localdomain sudo[305412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:10:46 np0005548790.localdomain sudo[305412]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:46 np0005548790.localdomain sudo[305412]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:46 np0005548790.localdomain sudo[305430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:10:46 np0005548790.localdomain sudo[305430]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:47 np0005548790.localdomain podman[305464]: 2025-12-06 10:10:47.434184899 +0000 UTC m=+0.110519550 container create 15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_bardeen, io.openshift.expose-services=, com.redhat.component=rhceph-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:10:47 np0005548790.localdomain podman[305464]: 2025-12-06 10:10:47.372369883 +0000 UTC m=+0.048704574 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:10:47 np0005548790.localdomain systemd[1]: Started libpod-conmon-15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543.scope.
Dec 06 10:10:47 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:10:47 np0005548790.localdomain podman[305464]: 2025-12-06 10:10:47.517114223 +0000 UTC m=+0.193448864 container init 15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_bardeen, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218)
Dec 06 10:10:47 np0005548790.localdomain podman[305464]: 2025-12-06 10:10:47.527963865 +0000 UTC m=+0.204298506 container start 15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_bardeen, vcs-type=git, architecture=x86_64, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Dec 06 10:10:47 np0005548790.localdomain podman[305464]: 2025-12-06 10:10:47.52850626 +0000 UTC m=+0.204840951 container attach 15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_bardeen, distribution-scope=public, io.openshift.expose-services=, release=1763362218, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z)
Dec 06 10:10:47 np0005548790.localdomain elated_bardeen[305480]: 167 167
Dec 06 10:10:47 np0005548790.localdomain podman[305464]: 2025-12-06 10:10:47.533007082 +0000 UTC m=+0.209341713 container died 15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_bardeen, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, version=7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Dec 06 10:10:47 np0005548790.localdomain systemd[1]: libpod-15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543.scope: Deactivated successfully.
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548789 (monmap changed)...
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548789 on np0005548789.localdomain
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: Reconfiguring mon.np0005548790 (monmap changed)...
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: Reconfiguring daemon mon.np0005548790 on np0005548790.localdomain
Dec 06 10:10:47 np0005548790.localdomain ceph-mon[301742]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:10:47 np0005548790.localdomain podman[305485]: 2025-12-06 10:10:47.658495473 +0000 UTC m=+0.111734492 container remove 15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_bardeen, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, GIT_CLEAN=True, vcs-type=git, io.openshift.expose-services=, ceph=True, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 06 10:10:47 np0005548790.localdomain systemd[1]: libpod-conmon-15c1801e9ba15b2cd17fa49fa3e3db8103ca20fe818a469db27b22e0bd340543.scope: Deactivated successfully.
Dec 06 10:10:47 np0005548790.localdomain sudo[305430]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:48 np0005548790.localdomain sudo[305501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:10:48 np0005548790.localdomain sudo[305501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:10:48 np0005548790.localdomain sudo[305501]: pam_unix(sudo:session): session closed for user root
Dec 06 10:10:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:10:48.393 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:10:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:10:48.395 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:10:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:10:48.395 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:10:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:10:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:10:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:10:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:10:48 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-941fdfd57172f9a95a0bb17a42ce6060b060a7c749518b0b893aeb5afdd46bc4-merged.mount: Deactivated successfully.
Dec 06 10:10:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:10:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18712 "" "Go-http-client/1.1"
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.26871 172.18.0.106:0/3380714700' entity='mgr.np0005548788.yvwbqq' 
Dec 06 10:10:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:49 np0005548790.localdomain ceph-mon[301742]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:10:51 np0005548790.localdomain ceph-mon[301742]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:10:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:10:52 np0005548790.localdomain podman[305519]: 2025-12-06 10:10:52.582748574 +0000 UTC m=+0.091114086 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:10:52 np0005548790.localdomain podman[305519]: 2025-12-06 10:10:52.617180002 +0000 UTC m=+0.125545514 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:10:52 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:10:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:10:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:10:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:10:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:10:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:10:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:10:53 np0005548790.localdomain ceph-mon[301742]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:10:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:55 np0005548790.localdomain ceph-mon[301742]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: tmp-crun.lD0yqh.mount: Deactivated successfully.
Dec 06 10:10:57 np0005548790.localdomain podman[305537]: 2025-12-06 10:10:57.585618112 +0000 UTC m=+0.095905205 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:10:57 np0005548790.localdomain podman[305538]: 2025-12-06 10:10:57.633836483 +0000 UTC m=+0.137631401 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 06 10:10:57 np0005548790.localdomain podman[305538]: 2025-12-06 10:10:57.644088728 +0000 UTC m=+0.147883606 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:10:57 np0005548790.localdomain podman[305537]: 2025-12-06 10:10:57.701631039 +0000 UTC m=+0.211918112 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:10:57 np0005548790.localdomain ceph-mon[301742]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:10:57 np0005548790.localdomain podman[305544]: 2025-12-06 10:10:57.794740798 +0000 UTC m=+0.293845719 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 06 10:10:57 np0005548790.localdomain podman[305544]: 2025-12-06 10:10:57.835179427 +0000 UTC m=+0.334284328 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Dec 06 10:10:57 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:10:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:10:59 np0005548790.localdomain ceph-mon[301742]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:01 np0005548790.localdomain ceph-mon[301742]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:03 np0005548790.localdomain ceph-mon[301742]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:11:04 np0005548790.localdomain podman[305600]: 2025-12-06 10:11:04.612429221 +0000 UTC m=+0.119572883 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 10:11:04 np0005548790.localdomain podman[305600]: 2025-12-06 10:11:04.621073304 +0000 UTC m=+0.128216926 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:11:04 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:11:05 np0005548790.localdomain ceph-mon[301742]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:11:06 np0005548790.localdomain podman[305618]: 2025-12-06 10:11:06.56253496 +0000 UTC m=+0.072313860 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:11:06 np0005548790.localdomain podman[305618]: 2025-12-06 10:11:06.57071641 +0000 UTC m=+0.080495280 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:11:06 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:11:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:11:07 np0005548790.localdomain ceph-mon[301742]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:11:09 np0005548790.localdomain podman[305641]: 2025-12-06 10:11:09.562615661 +0000 UTC m=+0.079802732 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 10:11:09 np0005548790.localdomain podman[305641]: 2025-12-06 10:11:09.639453501 +0000 UTC m=+0.156640552 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:11:09 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:11:09 np0005548790.localdomain ceph-mon[301742]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:11 np0005548790.localdomain ceph-mon[301742]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:13 np0005548790.localdomain ceph-mon[301742]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:15 np0005548790.localdomain ceph-mon[301742]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:17 np0005548790.localdomain ceph-mon[301742]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:11:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:11:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:11:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:11:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:11:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18715 "" "Go-http-client/1.1"
Dec 06 10:11:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:19 np0005548790.localdomain ceph-mon[301742]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:19 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/483164750' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 06 10:11:21 np0005548790.localdomain ceph-mon[301742]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:11:23 np0005548790.localdomain podman[305667]: 2025-12-06 10:11:23.565582561 +0000 UTC m=+0.079504033 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 10:11:23 np0005548790.localdomain podman[305667]: 2025-12-06 10:11:23.595299152 +0000 UTC m=+0.109220664 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:11:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:11:23 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:11:23 np0005548790.localdomain ceph-mon[301742]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:24 np0005548790.localdomain ceph-mon[301742]: from='client.44667 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:11:25 np0005548790.localdomain ceph-mon[301742]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:27 np0005548790.localdomain ceph-mon[301742]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: tmp-crun.chlRtZ.mount: Deactivated successfully.
Dec 06 10:11:28 np0005548790.localdomain podman[305687]: 2025-12-06 10:11:28.58698299 +0000 UTC m=+0.096044570 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true)
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: tmp-crun.iAutjs.mount: Deactivated successfully.
Dec 06 10:11:28 np0005548790.localdomain podman[305688]: 2025-12-06 10:11:28.628121907 +0000 UTC m=+0.132990984 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:11:28 np0005548790.localdomain podman[305688]: 2025-12-06 10:11:28.669193955 +0000 UTC m=+0.174063002 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 06 10:11:28 np0005548790.localdomain podman[305686]: 2025-12-06 10:11:28.679460152 +0000 UTC m=+0.191693597 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:11:28 np0005548790.localdomain podman[305686]: 2025-12-06 10:11:28.686976443 +0000 UTC m=+0.199209868 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:11:28 np0005548790.localdomain podman[305687]: 2025-12-06 10:11:28.698215626 +0000 UTC m=+0.207277196 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:11:28 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:11:28 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1823885354' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:28 np0005548790.localdomain sshd[305750]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:11:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:29 np0005548790.localdomain ceph-mon[301742]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/2405516431' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 06 10:11:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1834841947' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:31.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:31.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:11:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:31.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:11:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:31.351 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:11:31 np0005548790.localdomain sshd[305750]: Received disconnect from 101.47.160.186 port 39188:11: Bye Bye [preauth]
Dec 06 10:11:31 np0005548790.localdomain sshd[305750]: Disconnected from authenticating user root 101.47.160.186 port 39188 [preauth]
Dec 06 10:11:31 np0005548790.localdomain ceph-mon[301742]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:32.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:32.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:33.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:34 np0005548790.localdomain ceph-mon[301742]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:35 np0005548790.localdomain ceph-mon[301742]: from='client.44673 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.362 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.362 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.363 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.363 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.363 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:11:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:11:35 np0005548790.localdomain systemd[1]: tmp-crun.M1qIf5.mount: Deactivated successfully.
Dec 06 10:11:35 np0005548790.localdomain podman[305754]: 2025-12-06 10:11:35.602905542 +0000 UTC m=+0.113528339 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:11:35 np0005548790.localdomain podman[305754]: 2025-12-06 10:11:35.613737074 +0000 UTC m=+0.124359891 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:11:35 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:11:35 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:11:35 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1742337711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:35.815 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.024 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.026 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12023MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.026 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.027 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:36 np0005548790.localdomain ceph-mon[301742]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:36 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1742337711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.116 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.117 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.138 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:11:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:11:36 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/246951837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.585 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.592 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.653 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updated inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.654 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.655 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.695 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:11:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:36.696 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/246951837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:11:37 np0005548790.localdomain systemd[1]: tmp-crun.bLDZoz.mount: Deactivated successfully.
Dec 06 10:11:37 np0005548790.localdomain podman[305815]: 2025-12-06 10:11:37.580213294 +0000 UTC m=+0.094831576 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:11:37 np0005548790.localdomain podman[305815]: 2025-12-06 10:11:37.587663325 +0000 UTC m=+0.102281587 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:11:37 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:11:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:37.697 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:37.698 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:37.698 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:37.699 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:11:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:11:37.699 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:11:38 np0005548790.localdomain ceph-mon[301742]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/291434841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/416774902' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:11:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/416774902' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:11:40 np0005548790.localdomain ceph-mon[301742]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/259866061' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:11:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1381932184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:11:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:11:40 np0005548790.localdomain podman[305839]: 2025-12-06 10:11:40.573634566 +0000 UTC m=+0.084035655 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:11:40 np0005548790.localdomain podman[305839]: 2025-12-06 10:11:40.633154 +0000 UTC m=+0.143554999 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:11:40 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 e95: 6 total, 6 up, 6 in
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr handle_mgr_map Activating!
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr handle_mgr_map I am now activating
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548788"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548789"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005548790"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).mds e16 all = 0
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).mds e16 all = 0
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).mds e16 all = 0
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).mds e16 all = 1
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: balancer
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:11:41
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 06 10:11:41 np0005548790.localdomain sshd[304382]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 06 10:11:41 np0005548790.localdomain systemd[1]: session-72.scope: Deactivated successfully.
Dec 06 10:11:41 np0005548790.localdomain systemd[1]: session-72.scope: Consumed 7.143s CPU time.
Dec 06 10:11:41 np0005548790.localdomain systemd-logind[760]: Session 72 logged out. Waiting for processes to exit.
Dec 06 10:11:41 np0005548790.localdomain systemd-logind[760]: Removed session 72.
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: cephadm
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: crash
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: devicehealth
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: iostat
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: nfs
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: orchestrator
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: pg_autoscaler
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: progress
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Loading...
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f0654f3a400>, <progress.module.GhostEvent object at 0x7f0654f3a790>, <progress.module.GhostEvent object at 0x7f0654f3a7c0>, <progress.module.GhostEvent object at 0x7f0654f3a7f0>, <progress.module.GhostEvent object at 0x7f0654f3a820>, <progress.module.GhostEvent object at 0x7f0654f3a850>, <progress.module.GhostEvent object at 0x7f0654f3a880>, <progress.module.GhostEvent object at 0x7f0654f3a8b0>, <progress.module.GhostEvent object at 0x7f0654f3a8e0>, <progress.module.GhostEvent object at 0x7f0654f3a910>, <progress.module.GhostEvent object at 0x7f0654f3a940>, <progress.module.GhostEvent object at 0x7f0654f3a970>, <progress.module.GhostEvent object at 0x7f0654f3a9a0>, <progress.module.GhostEvent object at 0x7f0654f3a9d0>, <progress.module.GhostEvent object at 0x7f0654f3aa00>, <progress.module.GhostEvent object at 0x7f0654f3aa30>, <progress.module.GhostEvent object at 0x7f0654f3aa60>, <progress.module.GhostEvent object at 0x7f0654f3aa90>, <progress.module.GhostEvent object at 0x7f0654f3aac0>, <progress.module.GhostEvent object at 0x7f0654f3aaf0>, <progress.module.GhostEvent object at 0x7f0654f3ab20>, <progress.module.GhostEvent object at 0x7f0654f3ab50>, <progress.module.GhostEvent object at 0x7f0654f3ab80>, <progress.module.GhostEvent object at 0x7f0654f3abb0>, <progress.module.GhostEvent object at 0x7f0654f3abe0>, <progress.module.GhostEvent object at 0x7f0654f3ac10>, <progress.module.GhostEvent object at 0x7f0654f3ac40>, <progress.module.GhostEvent object at 0x7f0654f3ac70>, <progress.module.GhostEvent object at 0x7f0654f3aca0>, <progress.module.GhostEvent object at 0x7f0654f3acd0>, <progress.module.GhostEvent object at 0x7f0654f3ad00>, <progress.module.GhostEvent object at 0x7f0654f3ad30>, <progress.module.GhostEvent object at 0x7f0654f3ad60>, <progress.module.GhostEvent object at 0x7f0654f3ad90>, <progress.module.GhostEvent object at 0x7f0654f3adc0>, <progress.module.GhostEvent object at 0x7f0654f3adf0>, <progress.module.GhostEvent object at 0x7f0654f3ae20>, <progress.module.GhostEvent object at 0x7f0654f3ae50>, <progress.module.GhostEvent object at 0x7f0654f3ae80>, <progress.module.GhostEvent object at 0x7f0654f3aeb0>, <progress.module.GhostEvent object at 0x7f0654f3aee0>, <progress.module.GhostEvent object at 0x7f0654f3af10>, <progress.module.GhostEvent object at 0x7f0654f3af40>, <progress.module.GhostEvent object at 0x7f0654f3af70>, <progress.module.GhostEvent object at 0x7f0654f3afa0>, <progress.module.GhostEvent object at 0x7f0654f3afd0>, <progress.module.GhostEvent object at 0x7f064deba040>, <progress.module.GhostEvent object at 0x7f064deba070>, <progress.module.GhostEvent object at 0x7f064deba0a0>, <progress.module.GhostEvent object at 0x7f064deba0d0>] historic events
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Loaded OSDMap, ready.
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [devicehealth INFO root] Starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] recovery thread starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] starting setup
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: rbd_support
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: restful
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [restful INFO root] server_addr: :: server_port: 8003
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: status
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: telemetry
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [restful WARNING root] server not running: no certificate configured
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: mgr load Constructed class from module: volumes
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] PerfHandler: starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.915+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:11:41.918+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TaskHandler: starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} v 0)
Dec 06 10:11:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 06 10:11:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] setup complete
Dec 06 10:11:42 np0005548790.localdomain sshd[306003]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:11:42 np0005548790.localdomain sshd[306003]: Accepted publickey for ceph-admin from 192.168.122.108 port 57270 ssh2: RSA SHA256:HQYBT8n3HbnicDtP9tehd3+gJXMFtkw+fTMlmR2wCsE
Dec 06 10:11:42 np0005548790.localdomain systemd-logind[760]: New session 73 of user ceph-admin.
Dec 06 10:11:42 np0005548790.localdomain systemd[1]: Started Session 73 of User ceph-admin.
Dec 06 10:11:42 np0005548790.localdomain sshd[306003]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/3000437731' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: Activating manager daemon np0005548790.kvkfyr
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: osdmap e95: 6 total, 6 up, 6 in
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.200:0/3000437731' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: mgrmap e42: np0005548790.kvkfyr(active, starting, since 0.0402134s), standbys: np0005548785.vhqlsq, np0005548789.mzhmje
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548788"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548789"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata", "id": "np0005548790"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548789.vxwwsq"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548788.erzujf"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata", "who": "mds.np0005548790.vhcezv"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548790.kvkfyr", "id": "np0005548790.kvkfyr"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548785.vhqlsq", "id": "np0005548785.vhqlsq"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548789.mzhmje", "id": "np0005548789.mzhmje"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mds metadata"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mon metadata"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: Manager daemon np0005548790.kvkfyr is now available
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/mirror_snapshot_schedule"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005548790.kvkfyr/trash_purge_schedule"} : dispatch
Dec 06 10:11:42 np0005548790.localdomain sudo[306007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:42 np0005548790.localdomain sudo[306007]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:42 np0005548790.localdomain sudo[306007]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:42 np0005548790.localdomain sudo[306025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:11:42 np0005548790.localdomain sudo[306025]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:43 np0005548790.localdomain systemd[1]: tmp-crun.2qpceW.mount: Deactivated successfully.
Dec 06 10:11:43 np0005548790.localdomain podman[306112]: 2025-12-06 10:11:43.154343377 +0000 UTC m=+0.105888195 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, name=rhceph, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:11:43] ENGINE Bus STARTING
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:11:43] ENGINE Bus STARTING
Dec 06 10:11:43 np0005548790.localdomain podman[306112]: 2025-12-06 10:11:43.28287039 +0000 UTC m=+0.234415258 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:11:43] ENGINE Serving on https://172.18.0.108:7150
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:11:43] ENGINE Serving on https://172.18.0.108:7150
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:11:43] ENGINE Client ('172.18.0.108', 49786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:11:43] ENGINE Client ('172.18.0.108', 49786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:11:43] ENGINE Serving on http://172.18.0.108:8765
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:11:43] ENGINE Serving on http://172.18.0.108:8765
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cherrypy.error] [06/Dec/2025:10:11:43] ENGINE Bus STARTED
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : [06/Dec/2025:10:11:43] ENGINE Bus STARTED
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: mgrmap e43: np0005548790.kvkfyr(active, since 1.10215s), standbys: np0005548785.vhqlsq, np0005548789.mzhmje
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:11:43] ENGINE Bus STARTING
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:11:43] ENGINE Serving on https://172.18.0.108:7150
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:11:43] ENGINE Client ('172.18.0.108', 49786) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:11:43] ENGINE Serving on http://172.18.0.108:8765
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: [06/Dec/2025:10:11:43] ENGINE Bus STARTED
Dec 06 10:11:43 np0005548790.localdomain sudo[306025]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:11:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:43 np0005548790.localdomain ceph-mgr[286934]: [devicehealth INFO root] Check health
Dec 06 10:11:44 np0005548790.localdomain sudo[306260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:44 np0005548790.localdomain sudo[306260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:44 np0005548790.localdomain sudo[306260]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:11:44 np0005548790.localdomain sudo[306285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:11:44 np0005548790.localdomain sudo[306285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:11:44 np0005548790.localdomain sudo[306285]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:44 np0005548790.localdomain sudo[306336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:11:44 np0005548790.localdomain sudo[306336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:44 np0005548790.localdomain sudo[306336]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: Cluster is now healthy
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:44 np0005548790.localdomain sudo[306354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 06 10:11:44 np0005548790.localdomain sudo[306354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:45 np0005548790.localdomain sudo[306354]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:11:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:45 np0005548790.localdomain sudo[306392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:11:45 np0005548790.localdomain sudo[306392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:45 np0005548790.localdomain sudo[306392]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:45 np0005548790.localdomain sudo[306410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:11:45 np0005548790.localdomain sudo[306410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:45 np0005548790.localdomain sudo[306410]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:45 np0005548790.localdomain sudo[306428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:11:45 np0005548790.localdomain sudo[306428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:45 np0005548790.localdomain sudo[306428]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:45 np0005548790.localdomain sudo[306446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: mgr.server handle_open ignoring open from mgr.np0005548788.yvwbqq 172.18.0.106:0/4125374171; not ready for session (expect reconnect)
Dec 06 10:11:46 np0005548790.localdomain sudo[306446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:46 np0005548790.localdomain sudo[306446]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: mgrmap e44: np0005548790.kvkfyr(active, since 3s), standbys: np0005548785.vhqlsq, np0005548789.mzhmje
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain sudo[306464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:11:46 np0005548790.localdomain sudo[306464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:46 np0005548790.localdomain sudo[306464]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} v 0)
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:11:46 np0005548790.localdomain sudo[306498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:11:46 np0005548790.localdomain sudo[306498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:46 np0005548790.localdomain sudo[306498]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.805375) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906805451, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2714, "num_deletes": 257, "total_data_size": 11905829, "memory_usage": 12424560, "flush_reason": "Manual Compaction"}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 06 10:11:46 np0005548790.localdomain sudo[306516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new
Dec 06 10:11:46 np0005548790.localdomain sudo[306516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:46 np0005548790.localdomain sudo[306516]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906840148, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 7341396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12567, "largest_seqno": 15276, "table_properties": {"data_size": 7330634, "index_size": 6691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26504, "raw_average_key_size": 22, "raw_value_size": 7307651, "raw_average_value_size": 6130, "num_data_blocks": 287, "num_entries": 1192, "num_filter_entries": 1192, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015789, "oldest_key_time": 1765015789, "file_creation_time": 1765015906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 34875 microseconds, and 12509 cpu microseconds.
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.840254) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 7341396 bytes OK
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.840313) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.843704) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.843730) EVENT_LOG_v1 {"time_micros": 1765015906843724, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.843751) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 11892922, prev total WAL file size 11892922, number of live WAL files 2.
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.846238) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(7169KB)], [15(16MB)]
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906846304, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 24966610, "oldest_snapshot_seqno": -1}
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:46 np0005548790.localdomain sudo[306534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain sudo[306534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:46 np0005548790.localdomain sudo[306534]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:46 np0005548790.localdomain sudo[306552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:11:46 np0005548790.localdomain sudo[306552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:46 np0005548790.localdomain sudo[306552]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 12265 keys, 20846698 bytes, temperature: kUnknown
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906961760, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 20846698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20774615, "index_size": 40302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 327314, "raw_average_key_size": 26, "raw_value_size": 20563854, "raw_average_value_size": 1676, "num_data_blocks": 1548, "num_entries": 12265, "num_filter_entries": 12265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765015906, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.962063) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 20846698 bytes
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.963973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.1 rd, 180.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.0, 16.8 +0.0 blob) out(19.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 12806, records dropped: 541 output_compression: NoCompression
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.964004) EVENT_LOG_v1 {"time_micros": 1765015906963991, "job": 6, "event": "compaction_finished", "compaction_time_micros": 115558, "compaction_time_cpu_micros": 47290, "output_level": 6, "num_output_files": 1, "total_output_size": 20846698, "num_input_records": 12806, "num_output_records": 12265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906965105, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015906967722, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.846140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.967798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.967804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.967807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.967810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:11:46 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:11:46.967813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:11:47 np0005548790.localdomain sudo[306570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:11:47 np0005548790.localdomain sudo[306570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306570]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:11:47 np0005548790.localdomain sudo[306588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306588]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:11:47 np0005548790.localdomain sudo[306606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306606]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:11:47 np0005548790.localdomain sudo[306624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306624]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:11:47 np0005548790.localdomain sudo[306658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306658]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:47 np0005548790.localdomain sudo[306676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new
Dec 06 10:11:47 np0005548790.localdomain sudo[306676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306676]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:47 np0005548790.localdomain sudo[306694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306694]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:47 np0005548790.localdomain ceph-mon[301742]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:47 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:47 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:47 np0005548790.localdomain ceph-mon[301742]: Standby manager daemon np0005548788.yvwbqq started
Dec 06 10:11:47 np0005548790.localdomain ceph-mon[301742]: mgrmap e45: np0005548790.kvkfyr(active, since 4s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:11:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "mgr metadata", "who": "np0005548788.yvwbqq", "id": "np0005548788.yvwbqq"} : dispatch
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:47 np0005548790.localdomain sudo[306712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 06 10:11:47 np0005548790.localdomain sudo[306712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306712]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph
Dec 06 10:11:47 np0005548790.localdomain sudo[306730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306730]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:11:47 np0005548790.localdomain sudo[306748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306748]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:47 np0005548790.localdomain sudo[306766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:11:47 np0005548790.localdomain sudo[306766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306766]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:47 np0005548790.localdomain sudo[306784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:11:47 np0005548790.localdomain sudo[306784]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:47 np0005548790.localdomain sudo[306784]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain sudo[306818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:11:48 np0005548790.localdomain sudo[306818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306818]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:11:48 np0005548790.localdomain sudo[306836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new
Dec 06 10:11:48 np0005548790.localdomain sudo[306836]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306836]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain sudo[306854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain sudo[306854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306854]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO cephadm.serve] Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain sudo[306872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:11:48 np0005548790.localdomain sudo[306872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:11:48 np0005548790.localdomain sudo[306872]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:11:48 np0005548790.localdomain sudo[306890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config
Dec 06 10:11:48 np0005548790.localdomain sudo[306890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306890]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:11:48.394 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:11:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:11:48.395 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:11:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:11:48.395 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:11:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:11:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:11:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:11:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:11:48 np0005548790.localdomain sudo[306908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:11:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:11:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18719 "" "Go-http-client/1.1"
Dec 06 10:11:48 np0005548790.localdomain sudo[306908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306908]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain sudo[306926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8
Dec 06 10:11:48 np0005548790.localdomain sudo[306926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306926]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.conf
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: Updating np0005548789.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: Updating np0005548788.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:48 np0005548790.localdomain sudo[306944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:11:48 np0005548790.localdomain sudo[306944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306944]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain sudo[306978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:11:48 np0005548790.localdomain sudo[306978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306978]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain sudo[306996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new
Dec 06 10:11:48 np0005548790.localdomain sudo[306996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[306996]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain sudo[307014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-1939e851-b10c-5c3b-9bb7-8e7f380233e8/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring.new /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:48 np0005548790.localdomain sudo[307014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:48 np0005548790.localdomain sudo[307014]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:11:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 0 B/s wr, 18 op/s
Dec 06 10:11:48 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 9bee9321-81f3-494f-b1e1-452bdccb25d3 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:11:48 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 9bee9321-81f3-494f-b1e1-452bdccb25d3 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:11:48 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 9bee9321-81f3-494f-b1e1-452bdccb25d3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:11:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain sudo[307032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:11:49 np0005548790.localdomain sudo[307032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:49 np0005548790.localdomain sudo[307032]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:11:49 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev c8e2e267-e996-4b94-b294-29ea3bcac06d (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:11:49 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev c8e2e267-e996-4b94-b294-29ea3bcac06d (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:11:49 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event c8e2e267-e996-4b94-b294-29ea3bcac06d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain sudo[307050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:11:49 np0005548790.localdomain sudo[307050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:11:49 np0005548790.localdomain sudo[307050]: pam_unix(sudo:session): session closed for user root
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: Updating np0005548790.localdomain:/var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/config/ceph.client.admin.keyring
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:11:50 np0005548790.localdomain ceph-mon[301742]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 0 B/s wr, 18 op/s
Dec 06 10:11:50 np0005548790.localdomain ceph-mon[301742]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 06 10:11:50 np0005548790.localdomain ceph-mon[301742]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 06 10:11:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:11:51 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:11:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:11:52 np0005548790.localdomain ceph-mon[301742]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 0 B/s wr, 14 op/s
Dec 06 10:11:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:11:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:11:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:11:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:11:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:11:54 np0005548790.localdomain podman[307068]: 2025-12-06 10:11:54.573680656 +0000 UTC m=+0.087744754 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 10:11:54 np0005548790.localdomain podman[307068]: 2025-12-06 10:11:54.611332751 +0000 UTC m=+0.125396809 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:11:54 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:11:54 np0005548790.localdomain ceph-mon[301742]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 06 10:11:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:11:55 np0005548790.localdomain ceph-mon[301742]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:11:56 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:11:57 np0005548790.localdomain ceph-mon[301742]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:11:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:11:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: tmp-crun.dc5SI0.mount: Deactivated successfully.
Dec 06 10:11:59 np0005548790.localdomain podman[307086]: 2025-12-06 10:11:59.570927605 +0000 UTC m=+0.078213489 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:11:59 np0005548790.localdomain podman[307086]: 2025-12-06 10:11:59.605214088 +0000 UTC m=+0.112499992 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:11:59 np0005548790.localdomain podman[307093]: 2025-12-06 10:11:59.691695699 +0000 UTC m=+0.186984850 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 10:11:59 np0005548790.localdomain podman[307093]: 2025-12-06 10:11:59.729066585 +0000 UTC m=+0.224355736 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:11:59 np0005548790.localdomain podman[307087]: 2025-12-06 10:11:59.732711664 +0000 UTC m=+0.233130493 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:11:59 np0005548790.localdomain podman[307087]: 2025-12-06 10:11:59.816388859 +0000 UTC m=+0.316807678 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 06 10:11:59 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:12:00 np0005548790.localdomain ceph-mon[301742]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Dec 06 10:12:00 np0005548790.localdomain systemd[1]: tmp-crun.4QpTLh.mount: Deactivated successfully.
Dec 06 10:12:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:02 np0005548790.localdomain ceph-mon[301742]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:04 np0005548790.localdomain ceph-mon[301742]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:06 np0005548790.localdomain ceph-mon[301742]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:12:06 np0005548790.localdomain podman[307149]: 2025-12-06 10:12:06.30157664 +0000 UTC m=+0.081725314 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 06 10:12:06 np0005548790.localdomain podman[307149]: 2025-12-06 10:12:06.338573717 +0000 UTC m=+0.118722341 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:12:06 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:12:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:07 np0005548790.localdomain ceph-mon[301742]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:12:08 np0005548790.localdomain podman[307168]: 2025-12-06 10:12:08.582247196 +0000 UTC m=+0.079240237 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:12:08 np0005548790.localdomain podman[307168]: 2025-12-06 10:12:08.590088217 +0000 UTC m=+0.087081278 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:12:08 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:12:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:10 np0005548790.localdomain ceph-mon[301742]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:12:11 np0005548790.localdomain systemd[1]: tmp-crun.V1rTEP.mount: Deactivated successfully.
Dec 06 10:12:11 np0005548790.localdomain podman[307192]: 2025-12-06 10:12:11.580398155 +0000 UTC m=+0.091714703 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:12:11 np0005548790.localdomain podman[307192]: 2025-12-06 10:12:11.644438811 +0000 UTC m=+0.155755289 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 10:12:11 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:12:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:12 np0005548790.localdomain ceph-mon[301742]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:14 np0005548790.localdomain ceph-mon[301742]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:16 np0005548790.localdomain ceph-mon[301742]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:16 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:17 np0005548790.localdomain ceph-mon[301742]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:12:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:12:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:12:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:12:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:12:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18718 "" "Go-http-client/1.1"
Dec 06 10:12:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:20 np0005548790.localdomain ceph-mon[301742]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:22 np0005548790.localdomain ceph-mon[301742]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:12:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:12:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:24 np0005548790.localdomain ceph-mon[301742]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:12:25 np0005548790.localdomain podman[307217]: 2025-12-06 10:12:25.570259161 +0000 UTC m=+0.084296691 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 06 10:12:25 np0005548790.localdomain podman[307217]: 2025-12-06 10:12:25.604448572 +0000 UTC m=+0.118486172 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:12:25 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:12:26 np0005548790.localdomain ceph-mon[301742]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:27 np0005548790.localdomain ceph-mon[301742]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:30 np0005548790.localdomain ceph-mon[301742]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:30 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/825719281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: tmp-crun.KQ8DEF.mount: Deactivated successfully.
Dec 06 10:12:30 np0005548790.localdomain podman[307235]: 2025-12-06 10:12:30.588489894 +0000 UTC m=+0.093066268 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:12:30 np0005548790.localdomain podman[307235]: 2025-12-06 10:12:30.593982502 +0000 UTC m=+0.098558876 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:12:30 np0005548790.localdomain podman[307236]: 2025-12-06 10:12:30.691374437 +0000 UTC m=+0.191499341 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:12:30 np0005548790.localdomain podman[307236]: 2025-12-06 10:12:30.73043538 +0000 UTC m=+0.230560254 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:12:30 np0005548790.localdomain podman[307237]: 2025-12-06 10:12:30.733685377 +0000 UTC m=+0.230920093 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41)
Dec 06 10:12:30 np0005548790.localdomain podman[307237]: 2025-12-06 10:12:30.749263147 +0000 UTC m=+0.246497853 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:12:30 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:12:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:31 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2307179959' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:31 np0005548790.localdomain systemd[1]: tmp-crun.loUjwv.mount: Deactivated successfully.
Dec 06 10:12:32 np0005548790.localdomain ceph-mon[301742]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:32.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:32.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:12:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:32.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:12:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:32.523 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:12:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:32.524 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:33.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:34.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:34 np0005548790.localdomain ceph-mon[301742]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:35.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:36.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:36 np0005548790.localdomain ceph-mon[301742]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:12:36 np0005548790.localdomain podman[307296]: 2025-12-06 10:12:36.56427447 +0000 UTC m=+0.083344486 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 06 10:12:36 np0005548790.localdomain podman[307296]: 2025-12-06 10:12:36.604183146 +0000 UTC m=+0.123253113 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125)
Dec 06 10:12:36 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:12:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:37.420 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:37.421 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:37.421 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:37.421 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:12:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:37.422 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1486122464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:37.845 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:37 np0005548790.localdomain ceph-mon[301742]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1486122464' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.070 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.072 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=12007MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.072 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.073 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.151 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.151 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.170 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2157659444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.621 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:12:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:38.627 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4173888289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2157659444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1131690724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1131690724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:12:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:12:39 np0005548790.localdomain systemd[1]: tmp-crun.ewmg47.mount: Deactivated successfully.
Dec 06 10:12:39 np0005548790.localdomain podman[307360]: 2025-12-06 10:12:39.571042423 +0000 UTC m=+0.088629970 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:12:39 np0005548790.localdomain podman[307360]: 2025-12-06 10:12:39.6047069 +0000 UTC m=+0.122294387 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:12:39 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:12:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:39.808 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:12:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:39.811 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:12:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:39.812 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1131690724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:12:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1131690724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:12:39 np0005548790.localdomain ceph-mon[301742]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:40.813 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:40.814 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:40.814 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:40.814 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:12:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:12:40.814 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:12:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/233671784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:12:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:12:41
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'manila_metadata', 'vms', 'backups', 'manila_data']
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.0021774090359203426 quantized to 16 (current 16)
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mon[301742]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:12:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:12:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:12:42 np0005548790.localdomain podman[307384]: 2025-12-06 10:12:42.59065553 +0000 UTC m=+0.084743655 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 06 10:12:42 np0005548790.localdomain podman[307384]: 2025-12-06 10:12:42.626182837 +0000 UTC m=+0.120271012 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:12:42 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:12:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:44 np0005548790.localdomain ceph-mon[301742]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:46 np0005548790.localdomain ceph-mon[301742]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:47 np0005548790.localdomain ceph-mon[301742]: pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:12:48.395 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:12:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:12:48.396 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:12:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:12:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:12:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:12:48.397 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:12:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:12:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:12:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:12:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18715 "" "Go-http-client/1.1"
Dec 06 10:12:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:49 np0005548790.localdomain sudo[307409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:12:49 np0005548790.localdomain sudo[307409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:12:49 np0005548790.localdomain sudo[307409]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:49 np0005548790.localdomain sudo[307427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:12:49 np0005548790.localdomain sudo[307427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:50 np0005548790.localdomain sudo[307427]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:12:50 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 7e9f103d-494d-4986-8920-602dc67ecc3b (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:12:50 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 7e9f103d-494d-4986-8920-602dc67ecc3b (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:12:50 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 7e9f103d-494d-4986-8920-602dc67ecc3b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:12:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:12:50 np0005548790.localdomain sudo[307478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:12:50 np0005548790.localdomain sudo[307478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:12:50 np0005548790.localdomain sudo[307478]: pam_unix(sudo:session): session closed for user root
Dec 06 10:12:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:12:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:12:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:12:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:12:51 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:12:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:12:52 np0005548790.localdomain ceph-mon[301742]: pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:12:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:12:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:12:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:12:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:54 np0005548790.localdomain ceph-mon[301742]: pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:12:56 np0005548790.localdomain ceph-mon[301742]: pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:56 np0005548790.localdomain podman[307496]: 2025-12-06 10:12:56.560896418 +0000 UTC m=+0.078574120 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:12:56 np0005548790.localdomain podman[307496]: 2025-12-06 10:12:56.59253549 +0000 UTC m=+0.110213182 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:12:56 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:12:56 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:57 np0005548790.localdomain ceph-mon[301742]: pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:12:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:12:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:00 np0005548790.localdomain ceph-mon[301742]: pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: tmp-crun.NUoJdC.mount: Deactivated successfully.
Dec 06 10:13:01 np0005548790.localdomain podman[307515]: 2025-12-06 10:13:01.589963793 +0000 UTC m=+0.100238572 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:13:01 np0005548790.localdomain podman[307515]: 2025-12-06 10:13:01.628218633 +0000 UTC m=+0.138493412 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:13:01 np0005548790.localdomain podman[307516]: 2025-12-06 10:13:01.632074448 +0000 UTC m=+0.137126796 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:13:01 np0005548790.localdomain podman[307517]: 2025-12-06 10:13:01.696832892 +0000 UTC m=+0.199964119 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec 06 10:13:01 np0005548790.localdomain podman[307517]: 2025-12-06 10:13:01.711131488 +0000 UTC m=+0.214262715 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:13:01 np0005548790.localdomain podman[307516]: 2025-12-06 10:13:01.771842684 +0000 UTC m=+0.276895052 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 10:13:01 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:13:02 np0005548790.localdomain ceph-mon[301742]: pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:02 np0005548790.localdomain systemd[1]: tmp-crun.qNJqQF.mount: Deactivated successfully.
Dec 06 10:13:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:04 np0005548790.localdomain ceph-mon[301742]: pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:06.289 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:13:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:06.290 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:13:06 np0005548790.localdomain ceph-mon[301742]: pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.324 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:13:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:13:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:13:07 np0005548790.localdomain systemd[296658]: Created slice User Background Tasks Slice.
Dec 06 10:13:07 np0005548790.localdomain podman[307576]: 2025-12-06 10:13:07.564632578 +0000 UTC m=+0.080091108 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 10:13:07 np0005548790.localdomain systemd[296658]: Starting Cleanup of User's Temporary Files and Directories...
Dec 06 10:13:07 np0005548790.localdomain podman[307576]: 2025-12-06 10:13:07.57616079 +0000 UTC m=+0.091619270 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:13:07 np0005548790.localdomain systemd[296658]: Finished Cleanup of User's Temporary Files and Directories.
Dec 06 10:13:07 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:13:07 np0005548790.localdomain ceph-mon[301742]: pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:10 np0005548790.localdomain ceph-mon[301742]: pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:13:10 np0005548790.localdomain podman[307596]: 2025-12-06 10:13:10.561845492 +0000 UTC m=+0.077754805 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:13:10 np0005548790.localdomain podman[307596]: 2025-12-06 10:13:10.571692118 +0000 UTC m=+0.087601481 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:13:10 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.873834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015991873917, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1291, "num_deletes": 255, "total_data_size": 1637152, "memory_usage": 1661632, "flush_reason": "Manual Compaction"}
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015991885892, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1044636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15281, "largest_seqno": 16567, "table_properties": {"data_size": 1039395, "index_size": 2712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11725, "raw_average_key_size": 20, "raw_value_size": 1028560, "raw_average_value_size": 1764, "num_data_blocks": 115, "num_entries": 583, "num_filter_entries": 583, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015907, "oldest_key_time": 1765015907, "file_creation_time": 1765015991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12092 microseconds, and 4154 cpu microseconds.
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.885941) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1044636 bytes OK
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.885971) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.887728) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.887752) EVENT_LOG_v1 {"time_micros": 1765015991887745, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.887797) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1630920, prev total WAL file size 1631244, number of live WAL files 2.
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.888760) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303138' seq:0, type:0; will stop at (end)
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1020KB)], [18(19MB)]
Dec 06 10:13:11 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015991888863, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 21891334, "oldest_snapshot_seqno": -1}
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0645e48dc0>)]
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0645e48fd0>)]
Dec 06 10:13:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 12315 keys, 21754063 bytes, temperature: kUnknown
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015992031754, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 21754063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21679732, "index_size": 42422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 329431, "raw_average_key_size": 26, "raw_value_size": 21466173, "raw_average_value_size": 1743, "num_data_blocks": 1635, "num_entries": 12315, "num_filter_entries": 12315, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765015991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.032112) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 21754063 bytes
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.034257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.1 rd, 152.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 19.9 +0.0 blob) out(20.7 +0.0 blob), read-write-amplify(41.8) write-amplify(20.8) OK, records in: 12848, records dropped: 533 output_compression: NoCompression
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.034298) EVENT_LOG_v1 {"time_micros": 1765015992034281, "job": 8, "event": "compaction_finished", "compaction_time_micros": 143018, "compaction_time_cpu_micros": 58778, "output_level": 6, "num_output_files": 1, "total_output_size": 21754063, "num_input_records": 12848, "num_output_records": 12315, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015992034628, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765015992037656, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:11.888616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.037691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.037697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.037701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.037704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:12.037706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:12 np0005548790.localdomain ceph-mon[301742]: pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 125 MiB data, 637 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Dec 06 10:13:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e96 e96: 6 total, 6 up, 6 in
Dec 06 10:13:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:13:13 np0005548790.localdomain podman[307619]: 2025-12-06 10:13:13.56545011 +0000 UTC m=+0.078911599 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:13:13 np0005548790.localdomain podman[307619]: 2025-12-06 10:13:13.628850117 +0000 UTC m=+0.142311596 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 06 10:13:13 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:13:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:14 np0005548790.localdomain ceph-mon[301742]: pgmap v49: 177 pgs: 177 active+clean; 125 MiB data, 637 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s rd, 1.7 MiB/s wr, 12 op/s
Dec 06 10:13:14 np0005548790.localdomain ceph-mon[301742]: osdmap e96: 6 total, 6 up, 6 in
Dec 06 10:13:14 np0005548790.localdomain ceph-mon[301742]: mgrmap e46: np0005548790.kvkfyr(active, since 91s), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:13:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:14.293 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 125 MiB data, 637 MiB used, 41 GiB / 42 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Dec 06 10:13:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e97 e97: 6 total, 6 up, 6 in
Dec 06 10:13:16 np0005548790.localdomain ceph-mon[301742]: pgmap v51: 177 pgs: 177 active+clean; 125 MiB data, 637 MiB used, 41 GiB / 42 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Dec 06 10:13:16 np0005548790.localdomain ceph-mon[301742]: osdmap e97: 6 total, 6 up, 6 in
Dec 06 10:13:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 125 MiB data, 637 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Dec 06 10:13:17 np0005548790.localdomain ceph-mon[301742]: pgmap v53: 177 pgs: 177 active+clean; 125 MiB data, 637 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Dec 06 10:13:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:13:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:13:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:13:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:13:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:13:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18720 "" "Go-http-client/1.1"
Dec 06 10:13:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec 06 10:13:20 np0005548790.localdomain ceph-mon[301742]: pgmap v54: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 48 op/s
Dec 06 10:13:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.6 MiB/s wr, 29 op/s
Dec 06 10:13:22 np0005548790.localdomain ceph-mon[301742]: pgmap v55: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.6 MiB/s wr, 29 op/s
Dec 06 10:13:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:13:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:13:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:24 np0005548790.localdomain ceph-mon[301742]: pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Dec 06 10:13:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Dec 06 10:13:26 np0005548790.localdomain ceph-mon[301742]: pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Dec 06 10:13:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Dec 06 10:13:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:13:27 np0005548790.localdomain systemd[1]: tmp-crun.zoZVCx.mount: Deactivated successfully.
Dec 06 10:13:27 np0005548790.localdomain podman[307643]: 2025-12-06 10:13:27.580371859 +0000 UTC m=+0.094656061 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:13:27 np0005548790.localdomain podman[307643]: 2025-12-06 10:13:27.614166929 +0000 UTC m=+0.128451081 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:13:27 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:13:27 np0005548790.localdomain ceph-mon[301742]: pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Dec 06 10:13:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Dec 06 10:13:30 np0005548790.localdomain ceph-mon[301742]: pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Dec 06 10:13:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:31 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2010010665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:32 np0005548790.localdomain ceph-mon[301742]: pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:13:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:13:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:13:32 np0005548790.localdomain podman[307662]: 2025-12-06 10:13:32.583972168 +0000 UTC m=+0.090319655 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:13:32 np0005548790.localdomain podman[307662]: 2025-12-06 10:13:32.628967841 +0000 UTC m=+0.135315378 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:13:32 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:13:32 np0005548790.localdomain podman[307661]: 2025-12-06 10:13:32.630411659 +0000 UTC m=+0.140576729 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:13:32 np0005548790.localdomain podman[307661]: 2025-12-06 10:13:32.713266172 +0000 UTC m=+0.223431232 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:13:32 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:13:32 np0005548790.localdomain podman[307663]: 2025-12-06 10:13:32.683650554 +0000 UTC m=+0.188379077 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 06 10:13:32 np0005548790.localdomain podman[307663]: 2025-12-06 10:13:32.76404432 +0000 UTC m=+0.268772823 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public)
Dec 06 10:13:32 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:13:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:33.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:33.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:13:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:33.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:13:33 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/113296038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:33.357 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:13:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:33.358 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:33.358 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:34.354 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:34 np0005548790.localdomain ceph-mon[301742]: pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:36 np0005548790.localdomain ceph-mon[301742]: pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:37.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:37.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:13:37 np0005548790.localdomain ceph-mon[301742]: pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.351 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.352 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.352 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.353 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.353 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.374 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.374 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.375 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.375 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.375 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:13:38 np0005548790.localdomain podman[307726]: 2025-12-06 10:13:38.569573487 +0000 UTC m=+0.085504484 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 06 10:13:38 np0005548790.localdomain podman[307726]: 2025-12-06 10:13:38.579955007 +0000 UTC m=+0.095885974 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Dec 06 10:13:38 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2709377790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:38.854 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2709377790' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2247424638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2247424638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:13:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:39.081 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:13:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:39.083 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11983MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:13:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:39.084 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:39.084 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:39.824 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:13:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:39.824 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:13:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2247424638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:13:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2247424638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:13:39 np0005548790.localdomain ceph-mon[301742]: pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2976675022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e98 e98: 6 total, 6 up, 6 in
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.094 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.480 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.481 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.496 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.522 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.537 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:13:40 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:13:40Z|00035|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 06 10:13:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:13:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/92571660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.957 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:13:40 np0005548790.localdomain ceph-mon[301742]: osdmap e98: 6 total, 6 up, 6 in
Dec 06 10:13:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/92571660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.963 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.981 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.984 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.985 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.985 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:40.986 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:13:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:41.002 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:13:41 np0005548790.localdomain podman[307785]: 2025-12-06 10:13:41.597342654 +0000 UTC m=+0.115282186 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:13:41 np0005548790.localdomain podman[307785]: 2025-12-06 10:13:41.61052365 +0000 UTC m=+0.128463222 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:13:41 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:13:41
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['volumes', 'manila_metadata', 'images', '.mgr', 'backups', 'vms', 'manila_data']
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.901357) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016021901425, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 625, "num_deletes": 250, "total_data_size": 1101095, "memory_usage": 1118760, "flush_reason": "Manual Compaction"}
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016021909050, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 653774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16572, "largest_seqno": 17192, "table_properties": {"data_size": 651051, "index_size": 706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7627, "raw_average_key_size": 20, "raw_value_size": 645236, "raw_average_value_size": 1743, "num_data_blocks": 31, "num_entries": 370, "num_filter_entries": 370, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015991, "oldest_key_time": 1765015991, "file_creation_time": 1765016021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 7740 microseconds, and 3095 cpu microseconds.
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.909096) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 653774 bytes OK
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.909118) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911070) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911095) EVENT_LOG_v1 {"time_micros": 1765016021911088, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911115) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1097586, prev total WAL file size 1097910, number of live WAL files 2.
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373534' seq:72057594037927935, type:22 .. '6D6772737461740034303035' seq:0, type:0; will stop at (end)
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(638KB)], [21(20MB)]
Dec 06 10:13:41 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016021912046, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 22407837, "oldest_snapshot_seqno": -1}
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16)
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:13:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:13:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:41.984 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12173 keys, 20264803 bytes, temperature: kUnknown
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016022020067, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 20264803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20195917, "index_size": 37361, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 326703, "raw_average_key_size": 26, "raw_value_size": 19989281, "raw_average_value_size": 1642, "num_data_blocks": 1423, "num_entries": 12173, "num_filter_entries": 12173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.020359) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 20264803 bytes
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.022226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.3 rd, 187.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 20.7 +0.0 blob) out(19.3 +0.0 blob), read-write-amplify(65.3) write-amplify(31.0) OK, records in: 12685, records dropped: 512 output_compression: NoCompression
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.022256) EVENT_LOG_v1 {"time_micros": 1765016022022244, "job": 10, "event": "compaction_finished", "compaction_time_micros": 108105, "compaction_time_cpu_micros": 51776, "output_level": 6, "num_output_files": 1, "total_output_size": 20264803, "num_input_records": 12685, "num_output_records": 12173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016022022507, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016022025897, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:41.911918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.026025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.026034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.026037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.026040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:13:42.026045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:13:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3225179418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:13:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.3 KiB/s wr, 23 op/s
Dec 06 10:13:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e99 e99: 6 total, 6 up, 6 in
Dec 06 10:13:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:44 np0005548790.localdomain ceph-mon[301742]: pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.3 KiB/s wr, 23 op/s
Dec 06 10:13:44 np0005548790.localdomain ceph-mon[301742]: osdmap e99: 6 total, 6 up, 6 in
Dec 06 10:13:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:13:44 np0005548790.localdomain podman[307808]: 2025-12-06 10:13:44.569680589 +0000 UTC m=+0.079365849 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 10:13:44 np0005548790.localdomain podman[307808]: 2025-12-06 10:13:44.607309092 +0000 UTC m=+0.116994312 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec 06 10:13:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:44.608 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:13:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:44.609 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:13:44 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:13:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 29 op/s
Dec 06 10:13:46 np0005548790.localdomain ceph-mon[301742]: pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 29 op/s
Dec 06 10:13:46 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:46.611 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:13:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 29 op/s
Dec 06 10:13:47 np0005548790.localdomain ceph-mon[301742]: pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.9 KiB/s wr, 29 op/s
Dec 06 10:13:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:13:48.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:13:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:13:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:13:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:48.395 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:13:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:48.396 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:13:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:13:48.396 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:13:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:13:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:13:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:13:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18718 "" "Go-http-client/1.1"
Dec 06 10:13:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 3.4 KiB/s wr, 44 op/s
Dec 06 10:13:50 np0005548790.localdomain ceph-mon[301742]: pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 3.4 KiB/s wr, 44 op/s
Dec 06 10:13:50 np0005548790.localdomain sudo[307834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:50 np0005548790.localdomain sudo[307834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:13:50 np0005548790.localdomain sudo[307834]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:50 np0005548790.localdomain sudo[307852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 10:13:50 np0005548790.localdomain sudo[307852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:13:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 39 op/s
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e100 e100: 6 total, 6 up, 6 in
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:13:51 np0005548790.localdomain sudo[307852]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:13:51 np0005548790.localdomain sudo[307891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:13:51 np0005548790.localdomain sudo[307891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:13:51 np0005548790.localdomain sudo[307891]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:51 np0005548790.localdomain sudo[307909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:13:51 np0005548790.localdomain sudo[307909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:13:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e101 e101: 6 total, 6 up, 6 in
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 39 op/s
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: osdmap e100: 6 total, 6 up, 6 in
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: osdmap e101: 6 total, 6 up, 6 in
Dec 06 10:13:52 np0005548790.localdomain sudo[307909]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:13:52 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 020ecd1e-d98c-449d-b5dd-f450ebc51259 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:13:52 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 020ecd1e-d98c-449d-b5dd-f450ebc51259 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:13:52 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 020ecd1e-d98c-449d-b5dd-f450ebc51259 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:13:52 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:13:52 np0005548790.localdomain sudo[307959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:13:52 np0005548790.localdomain sudo[307959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:13:52 np0005548790.localdomain sudo[307959]: pam_unix(sudo:session): session closed for user root
Dec 06 10:13:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 38 op/s
Dec 06 10:13:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:13:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:13:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:13:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:13:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:13:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:13:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:13:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:13:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:13:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:54 np0005548790.localdomain ceph-mon[301742]: pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 38 op/s
Dec 06 10:13:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 38 op/s
Dec 06 10:13:56 np0005548790.localdomain ceph-mon[301742]: pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 38 op/s
Dec 06 10:13:56 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:13:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:13:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Dec 06 10:13:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:13:57 np0005548790.localdomain ceph-mon[301742]: pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Dec 06 10:13:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e102 e102: 6 total, 6 up, 6 in
Dec 06 10:13:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:13:58 np0005548790.localdomain podman[307977]: 2025-12-06 10:13:58.576099161 +0000 UTC m=+0.081843827 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec 06 10:13:58 np0005548790.localdomain podman[307977]: 2025-12-06 10:13:58.609267505 +0000 UTC m=+0.115012141 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 06 10:13:58 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:13:58 np0005548790.localdomain ceph-mon[301742]: osdmap e102: 6 total, 6 up, 6 in
Dec 06 10:13:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:13:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 06 10:13:59 np0005548790.localdomain ceph-mon[301742]: pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Dec 06 10:14:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 2.4 KiB/s wr, 26 op/s
Dec 06 10:14:02 np0005548790.localdomain ceph-mon[301742]: pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 2.4 KiB/s wr, 26 op/s
Dec 06 10:14:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: tmp-crun.oUfoqM.mount: Deactivated successfully.
Dec 06 10:14:03 np0005548790.localdomain podman[307996]: 2025-12-06 10:14:03.575472466 +0000 UTC m=+0.090276613 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:14:03 np0005548790.localdomain podman[307996]: 2025-12-06 10:14:03.595390553 +0000 UTC m=+0.110194670 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:14:03 np0005548790.localdomain podman[307997]: 2025-12-06 10:14:03.666977621 +0000 UTC m=+0.179754673 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:14:03 np0005548790.localdomain podman[307997]: 2025-12-06 10:14:03.67509774 +0000 UTC m=+0.187874792 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:14:03 np0005548790.localdomain podman[307998]: 2025-12-06 10:14:03.59491131 +0000 UTC m=+0.102993275 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=)
Dec 06 10:14:03 np0005548790.localdomain podman[307998]: 2025-12-06 10:14:03.725357764 +0000 UTC m=+0.233439729 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 10:14:03 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:14:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:04 np0005548790.localdomain ceph-mon[301742]: pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 06 10:14:04 np0005548790.localdomain sshd[308058]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:14:04 np0005548790.localdomain sshd[308060]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:14:04 np0005548790.localdomain sshd[308060]: error: kex_exchange_identification: client sent invalid protocol identifier ""
Dec 06 10:14:04 np0005548790.localdomain sshd[308060]: banner exchange: Connection from 3.137.73.221 port 46168: invalid format
Dec 06 10:14:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 06 10:14:06 np0005548790.localdomain sshd[308061]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:14:06 np0005548790.localdomain sshd[308061]: error: kex_exchange_identification: banner line contains invalid characters
Dec 06 10:14:06 np0005548790.localdomain sshd[308061]: banner exchange: Connection from 3.137.73.221 port 46184: invalid format
Dec 06 10:14:06 np0005548790.localdomain ceph-mon[301742]: pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Dec 06 10:14:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 e103: 6 total, 6 up, 6 in
Dec 06 10:14:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 904 B/s wr, 17 op/s
Dec 06 10:14:07 np0005548790.localdomain ceph-mon[301742]: osdmap e103: 6 total, 6 up, 6 in
Dec 06 10:14:07 np0005548790.localdomain ceph-mon[301742]: pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 904 B/s wr, 17 op/s
Dec 06 10:14:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 06 10:14:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:14:09 np0005548790.localdomain podman[308062]: 2025-12-06 10:14:09.569499672 +0000 UTC m=+0.082402971 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:14:09 np0005548790.localdomain podman[308062]: 2025-12-06 10:14:09.584253749 +0000 UTC m=+0.097157018 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 10:14:09 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:14:10 np0005548790.localdomain ceph-mon[301742]: pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:12 np0005548790.localdomain ceph-mon[301742]: pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Dec 06 10:14:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:14:12 np0005548790.localdomain podman[308083]: 2025-12-06 10:14:12.561047454 +0000 UTC m=+0.075293470 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:14:12 np0005548790.localdomain podman[308083]: 2025-12-06 10:14:12.572536403 +0000 UTC m=+0.086782439 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:14:12 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:14:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:14:13 np0005548790.localdomain sshd[308058]: Received disconnect from 101.47.160.186 port 55628:11: Bye Bye [preauth]
Dec 06 10:14:13 np0005548790.localdomain sshd[308058]: Disconnected from authenticating user root 101.47.160.186 port 55628 [preauth]
Dec 06 10:14:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:14 np0005548790.localdomain ceph-mon[301742]: pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:14:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:14.595 262327 INFO neutron.agent.linux.ip_lib [None req-84d42a8d-5c19-4fb7-bdc9-fe3a9409552a - - - - - -] Device tap949736e1-2a cannot be used as it has no MAC address
Dec 06 10:14:14 np0005548790.localdomain kernel: device tap949736e1-2a entered promiscuous mode
Dec 06 10:14:14 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016054.6291] manager: (tap949736e1-2a): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Dec 06 10:14:14 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:14Z|00036|binding|INFO|Claiming lport 949736e1-2ab1-4514-a633-0325abfed8f8 for this chassis.
Dec 06 10:14:14 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:14Z|00037|binding|INFO|949736e1-2ab1-4514-a633-0325abfed8f8: Claiming unknown
Dec 06 10:14:14 np0005548790.localdomain systemd-udevd[308116]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:14.646 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-04974db5-7261-4ae2-b659-99265ca8d091', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04974db5-7261-4ae2-b659-99265ca8d091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bd426c09dd743399e71eb5c44db45cb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=664f45b3-7d97-4383-8014-1d1ca469c527, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=949736e1-2ab1-4514-a633-0325abfed8f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:14.648 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 949736e1-2ab1-4514-a633-0325abfed8f8 in datapath 04974db5-7261-4ae2-b659-99265ca8d091 bound to our chassis
Dec 06 10:14:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:14.650 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 04974db5-7261-4ae2-b659-99265ca8d091 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:14:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:14.651 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[056f786a-1c7d-4145-a7bf-ca892fe6ee27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:14Z|00038|binding|INFO|Setting lport 949736e1-2ab1-4514-a633-0325abfed8f8 ovn-installed in OVS
Dec 06 10:14:14 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:14Z|00039|binding|INFO|Setting lport 949736e1-2ab1-4514-a633-0325abfed8f8 up in Southbound
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap949736e1-2a: No such device
Dec 06 10:14:14 np0005548790.localdomain podman[308120]: 2025-12-06 10:14:14.759446383 +0000 UTC m=+0.084829588 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:14:14 np0005548790.localdomain podman[308120]: 2025-12-06 10:14:14.815196495 +0000 UTC m=+0.140579690 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:14:14 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:14:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:14:15 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2845146463' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:15 np0005548790.localdomain podman[308214]: 
Dec 06 10:14:15 np0005548790.localdomain podman[308214]: 2025-12-06 10:14:15.584246558 +0000 UTC m=+0.087493198 container create 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:15 np0005548790.localdomain systemd[1]: Started libpod-conmon-415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a.scope.
Dec 06 10:14:15 np0005548790.localdomain podman[308214]: 2025-12-06 10:14:15.54201209 +0000 UTC m=+0.045258750 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:14:15 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:14:15 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea2a1aea5fa4d28291b0e1707cea6272f5dd98200e5e0fadb479df92052417a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:15 np0005548790.localdomain podman[308214]: 2025-12-06 10:14:15.658455548 +0000 UTC m=+0.161702178 container init 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:14:15 np0005548790.localdomain podman[308214]: 2025-12-06 10:14:15.670007419 +0000 UTC m=+0.173254059 container start 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 10:14:15 np0005548790.localdomain dnsmasq[308232]: started, version 2.85 cachesize 150
Dec 06 10:14:15 np0005548790.localdomain dnsmasq[308232]: DNS service limited to local subnets
Dec 06 10:14:15 np0005548790.localdomain dnsmasq[308232]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:14:15 np0005548790.localdomain dnsmasq[308232]: warning: no upstream servers configured
Dec 06 10:14:15 np0005548790.localdomain dnsmasq-dhcp[308232]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:14:15 np0005548790.localdomain dnsmasq[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/addn_hosts - 0 addresses
Dec 06 10:14:15 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/host
Dec 06 10:14:15 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/opts
Dec 06 10:14:15 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:15.840 262327 INFO neutron.agent.dhcp.agent [None req-19ccee7e-fdfc-4d70-9bdd-8c4bf3c76c5f - - - - - -] DHCP configuration for ports {'3872b474-2156-4e5d-b67f-4ad99bc71ba4'} is completed
Dec 06 10:14:16 np0005548790.localdomain ceph-mon[301742]: pgmap v88: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:14:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:14:17 np0005548790.localdomain ceph-mon[301742]: pgmap v89: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:14:17 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/989352054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:14:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:14:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:14:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:14:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:14:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19196 "" "Go-http-client/1.1"
Dec 06 10:14:18 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:18.569 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:18Z, description=, device_id=9588c462-2236-4443-8871-1214f0871ce4, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857ef880>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c86128d30>], id=29bde6cc-95c4-46ef-ad15-a414473ca462, ip_allocation=immediate, mac_address=fa:16:3e:56:e6:e0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:12Z, description=, dns_domain=, id=04974db5-7261-4ae2-b659-99265ca8d091, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1518606624-network, port_security_enabled=True, project_id=5bd426c09dd743399e71eb5c44db45cb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55244, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=337, status=ACTIVE, subnets=['571232f4-8599-400a-a83a-3da9e2000d76'], tags=[], tenant_id=5bd426c09dd743399e71eb5c44db45cb, updated_at=2025-12-06T10:14:13Z, vlan_transparent=None, network_id=04974db5-7261-4ae2-b659-99265ca8d091, port_security_enabled=False, project_id=5bd426c09dd743399e71eb5c44db45cb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=374, status=DOWN, tags=[], tenant_id=5bd426c09dd743399e71eb5c44db45cb, updated_at=2025-12-06T10:14:18Z on network 04974db5-7261-4ae2-b659-99265ca8d091
Dec 06 10:14:18 np0005548790.localdomain podman[308250]: 2025-12-06 10:14:18.785518211 +0000 UTC m=+0.057288195 container kill 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:14:18 np0005548790.localdomain dnsmasq[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/addn_hosts - 1 addresses
Dec 06 10:14:18 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/host
Dec 06 10:14:18 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/opts
Dec 06 10:14:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3413727768' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 192 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 10:14:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:19.048 262327 INFO neutron.agent.dhcp.agent [None req-200e92d4-b09e-470b-9cff-3fbdafa43613 - - - - - -] DHCP configuration for ports {'29bde6cc-95c4-46ef-ad15-a414473ca462'} is completed
Dec 06 10:14:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:19.798 280869 DEBUG oslo_concurrency.processutils [None req-27bed448-d1b6-4520-a06f-03871de7a496 01f72bc0493e447b8e64205b986c543b 57c9f39ae20545aca7be3f46dd0caee1 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:19.817 280869 DEBUG oslo_concurrency.processutils [None req-27bed448-d1b6-4520-a06f-03871de7a496 01f72bc0493e447b8e64205b986c543b 57c9f39ae20545aca7be3f46dd0caee1 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:20 np0005548790.localdomain ceph-mon[301742]: pgmap v90: 177 pgs: 177 active+clean; 192 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 10:14:20 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:20.514 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:18Z, description=, device_id=9588c462-2236-4443-8871-1214f0871ce4, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85842be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85842700>], id=29bde6cc-95c4-46ef-ad15-a414473ca462, ip_allocation=immediate, mac_address=fa:16:3e:56:e6:e0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:12Z, description=, dns_domain=, id=04974db5-7261-4ae2-b659-99265ca8d091, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-1518606624-network, port_security_enabled=True, project_id=5bd426c09dd743399e71eb5c44db45cb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55244, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=337, status=ACTIVE, subnets=['571232f4-8599-400a-a83a-3da9e2000d76'], tags=[], tenant_id=5bd426c09dd743399e71eb5c44db45cb, updated_at=2025-12-06T10:14:13Z, vlan_transparent=None, network_id=04974db5-7261-4ae2-b659-99265ca8d091, port_security_enabled=False, project_id=5bd426c09dd743399e71eb5c44db45cb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=374, status=DOWN, tags=[], tenant_id=5bd426c09dd743399e71eb5c44db45cb, updated_at=2025-12-06T10:14:18Z on network 04974db5-7261-4ae2-b659-99265ca8d091
Dec 06 10:14:20 np0005548790.localdomain dnsmasq[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/addn_hosts - 1 addresses
Dec 06 10:14:20 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/host
Dec 06 10:14:20 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/opts
Dec 06 10:14:20 np0005548790.localdomain podman[308289]: 2025-12-06 10:14:20.761860457 +0000 UTC m=+0.063924333 container kill 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:14:21 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:21.024 262327 INFO neutron.agent.dhcp.agent [None req-c59ec503-51bb-40a2-a437-ac2e1e13a478 - - - - - -] DHCP configuration for ports {'29bde6cc-95c4-46ef-ad15-a414473ca462'} is completed
Dec 06 10:14:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v91: 177 pgs: 177 active+clean; 192 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 10:14:22 np0005548790.localdomain ceph-mon[301742]: pgmap v91: 177 pgs: 177 active+clean; 192 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Dec 06 10:14:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:14:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:14:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:24 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:14:24.056 2 INFO neutron.agent.securitygroups_rpc [None req-713c535f-db70-452f-a97f-68d844244da8 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']
Dec 06 10:14:24 np0005548790.localdomain ceph-mon[301742]: pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 06 10:14:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v93: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 06 10:14:26 np0005548790.localdomain ceph-mon[301742]: pgmap v93: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 06 10:14:26 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:26.721 262327 INFO neutron.agent.linux.ip_lib [None req-0a5f9e9e-9029-4ff2-b320-9ca810f21049 - - - - - -] Device tap90d179d9-70 cannot be used as it has no MAC address
Dec 06 10:14:26 np0005548790.localdomain kernel: device tap90d179d9-70 entered promiscuous mode
Dec 06 10:14:26 np0005548790.localdomain systemd-udevd[308320]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:26 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016066.7465] manager: (tap90d179d9-70): new Generic device (/org/freedesktop/NetworkManager/Devices/15)
Dec 06 10:14:26 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:26Z|00040|binding|INFO|Claiming lport 90d179d9-70ee-4fd4-b3f7-244a3cbb2cac for this chassis.
Dec 06 10:14:26 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:26Z|00041|binding|INFO|90d179d9-70ee-4fd4-b3f7-244a3cbb2cac: Claiming unknown
Dec 06 10:14:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:26.761 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9bb405c-aea0-4a81-a300-475f8e1e8050, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=90d179d9-70ee-4fd4-b3f7-244a3cbb2cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:26.763 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 90d179d9-70ee-4fd4-b3f7-244a3cbb2cac in datapath 932e7489-8895-41d4-92c6-0d944505e7e6 bound to our chassis
Dec 06 10:14:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:26.764 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 932e7489-8895-41d4-92c6-0d944505e7e6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:26.765 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[91a69dac-a7fc-4c04-a910-c6c439fc9817]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:26Z|00042|binding|INFO|Setting lport 90d179d9-70ee-4fd4-b3f7-244a3cbb2cac ovn-installed in OVS
Dec 06 10:14:26 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:26Z|00043|binding|INFO|Setting lport 90d179d9-70ee-4fd4-b3f7-244a3cbb2cac up in Southbound
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:26 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap90d179d9-70: No such device
Dec 06 10:14:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 06 10:14:27 np0005548790.localdomain podman[308392]: 
Dec 06 10:14:27 np0005548790.localdomain podman[308392]: 2025-12-06 10:14:27.6371251 +0000 UTC m=+0.088978179 container create 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:14:27 np0005548790.localdomain systemd[1]: Started libpod-conmon-83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2.scope.
Dec 06 10:14:27 np0005548790.localdomain podman[308392]: 2025-12-06 10:14:27.593966487 +0000 UTC m=+0.045819616 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:14:27 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:14:27 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148f1b7c6389b4a4b7cc0c9d16e7695e44bf29396406f91e0b7e03b1be1b2340/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:27 np0005548790.localdomain podman[308392]: 2025-12-06 10:14:27.713982761 +0000 UTC m=+0.165835820 container init 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:14:27 np0005548790.localdomain podman[308392]: 2025-12-06 10:14:27.722638904 +0000 UTC m=+0.174491993 container start 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:14:27 np0005548790.localdomain dnsmasq[308410]: started, version 2.85 cachesize 150
Dec 06 10:14:27 np0005548790.localdomain dnsmasq[308410]: DNS service limited to local subnets
Dec 06 10:14:27 np0005548790.localdomain dnsmasq[308410]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:14:27 np0005548790.localdomain dnsmasq[308410]: warning: no upstream servers configured
Dec 06 10:14:27 np0005548790.localdomain dnsmasq-dhcp[308410]: DHCP, static leases only on 19.80.0.0, lease time 1d
Dec 06 10:14:27 np0005548790.localdomain dnsmasq[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/addn_hosts - 0 addresses
Dec 06 10:14:27 np0005548790.localdomain dnsmasq-dhcp[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/host
Dec 06 10:14:27 np0005548790.localdomain dnsmasq-dhcp[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/opts
Dec 06 10:14:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:27.902 262327 INFO neutron.agent.dhcp.agent [None req-d33915ab-6b3e-4de2-ae34-a9e468e45965 - - - - - -] DHCP configuration for ports {'9a87eef5-19db-4fcf-a021-4f61b153af33'} is completed
Dec 06 10:14:27 np0005548790.localdomain ceph-mon[301742]: pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Dec 06 10:14:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 06 10:14:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:14:29.200 2 INFO neutron.agent.securitygroups_rpc [None req-42741e53-1189-4d3e-a617-18fc0438f9c5 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']
Dec 06 10:14:29 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:29.311 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:28Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857b8340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857b8310>], id=3b69daca-b91a-4923-9795-2e6a02ee3d59, ip_allocation=immediate, mac_address=fa:16:3e:a8:e1:a6, name=tempest-subport-546955816, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:24Z, description=, dns_domain=, id=932e7489-8895-41d4-92c6-0d944505e7e6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-416741188, port_security_enabled=True, project_id=7897d6398eb64eb29c66df8db792e581, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49569, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=437, status=ACTIVE, subnets=['ad498427-0c8a-4a31-b804-e92b3a9c90aa'], tags=[], tenant_id=7897d6398eb64eb29c66df8db792e581, updated_at=2025-12-06T10:14:25Z, vlan_transparent=None, network_id=932e7489-8895-41d4-92c6-0d944505e7e6, port_security_enabled=True, project_id=7897d6398eb64eb29c66df8db792e581, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['bfad329a-0ea3-4b02-8e91-9d15749f8c9b'], standard_attr_id=482, status=DOWN, tags=[], tenant_id=7897d6398eb64eb29c66df8db792e581, updated_at=2025-12-06T10:14:28Z on network 932e7489-8895-41d4-92c6-0d944505e7e6
Dec 06 10:14:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:14:29 np0005548790.localdomain systemd[1]: tmp-crun.RHH5cL.mount: Deactivated successfully.
Dec 06 10:14:29 np0005548790.localdomain dnsmasq[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/addn_hosts - 1 addresses
Dec 06 10:14:29 np0005548790.localdomain dnsmasq-dhcp[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/host
Dec 06 10:14:29 np0005548790.localdomain dnsmasq-dhcp[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/opts
Dec 06 10:14:29 np0005548790.localdomain podman[308427]: 2025-12-06 10:14:29.531839085 +0000 UTC m=+0.074208380 container kill 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:14:29 np0005548790.localdomain podman[308437]: 2025-12-06 10:14:29.588915403 +0000 UTC m=+0.098609668 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:14:29 np0005548790.localdomain podman[308437]: 2025-12-06 10:14:29.599388095 +0000 UTC m=+0.109082360 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:14:29 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:14:29 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:29.762 262327 INFO neutron.agent.dhcp.agent [None req-70a4d6ad-21ea-41fe-b288-6ee990f0fa4f - - - - - -] DHCP configuration for ports {'3b69daca-b91a-4923-9795-2e6a02ee3d59'} is completed
Dec 06 10:14:30 np0005548790.localdomain ceph-mon[301742]: pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 06 10:14:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:14:32 np0005548790.localdomain ceph-mon[301742]: pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 776 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 06 10:14:32 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3766673990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v97: 177 pgs: 177 active+clean; 213 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 10:14:33 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4228215768' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:34 np0005548790.localdomain ceph-mon[301742]: pgmap v97: 177 pgs: 177 active+clean; 213 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.218 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.219 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.239 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.342 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.343 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.345 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.345 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.346 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.347 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.355 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.356 280869 INFO nova.compute.claims [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Claim successful on node np0005548790.localdomain
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.382 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.383 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.384 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.494 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: tmp-crun.qDNLr9.mount: Deactivated successfully.
Dec 06 10:14:34 np0005548790.localdomain podman[308469]: 2025-12-06 10:14:34.586624184 +0000 UTC m=+0.095275669 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:14:34 np0005548790.localdomain podman[308469]: 2025-12-06 10:14:34.627477105 +0000 UTC m=+0.136128600 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:14:34 np0005548790.localdomain podman[308468]: 2025-12-06 10:14:34.634590057 +0000 UTC m=+0.146018107 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:14:34 np0005548790.localdomain podman[308470]: 2025-12-06 10:14:34.691852389 +0000 UTC m=+0.195584491 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal)
Dec 06 10:14:34 np0005548790.localdomain podman[308470]: 2025-12-06 10:14:34.706059432 +0000 UTC m=+0.209791534 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:14:34 np0005548790.localdomain podman[308468]: 2025-12-06 10:14:34.715150497 +0000 UTC m=+0.226578527 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:14:34 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:14:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/838118298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.942 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.949 280869 DEBUG nova.compute.provider_tree [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.966 280869 DEBUG nova.scheduler.client.report [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.990 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:34.992 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.035 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.035 280869 DEBUG nova.network.neutron [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:14:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 213 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 475 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.051 280869 INFO nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.095 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.189 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.192 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.193 280869 INFO nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Creating image(s)
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.237 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.282 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.322 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.327 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "cb68b180567fda17719a7393615b2f958ad3226e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.328 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "cb68b180567fda17719a7393615b2f958ad3226e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:35.366 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/838118298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:35 np0005548790.localdomain sshd[308602]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:14:35 np0005548790.localdomain sshd[308603]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:14:35 np0005548790.localdomain sshd[308602]: error: kex_exchange_identification: Connection closed by remote host
Dec 06 10:14:35 np0005548790.localdomain sshd[308602]: Connection closed by 3.137.73.221 port 57412
Dec 06 10:14:36 np0005548790.localdomain sshd[308603]: error: kex_exchange_identification: banner line contains invalid characters
Dec 06 10:14:36 np0005548790.localdomain sshd[308603]: banner exchange: Connection from 3.137.73.221 port 57408: invalid format
Dec 06 10:14:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:36.156 280869 DEBUG nova.virt.libvirt.imagebackend [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Image locations are: [{'url': 'rbd://1939e851-b10c-5c3b-9bb7-8e7f380233e8/images/6a944ab6-8965-4055-b7fc-af6e395005ea/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://1939e851-b10c-5c3b-9bb7-8e7f380233e8/images/6a944ab6-8965-4055-b7fc-af6e395005ea/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 10:14:36 np0005548790.localdomain ceph-mon[301742]: pgmap v98: 177 pgs: 177 active+clean; 213 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 475 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 10:14:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:36.466 280869 WARNING oslo_policy.policy [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 10:14:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:36.467 280869 WARNING oslo_policy.policy [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 06 10:14:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:36.470 280869 DEBUG nova.policy [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac2e85103fd14829ad4e6df2357da95b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7897d6398eb64eb29c66df8db792e581', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:14:36 np0005548790.localdomain sshd[308604]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:14:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:36.964 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.036 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.part --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.037 280869 DEBUG nova.virt.images [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] 6a944ab6-8965-4055-b7fc-af6e395005ea was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.039 280869 DEBUG nova.privsep.utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.039 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.part /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 213 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 475 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 10:14:37 np0005548790.localdomain sshd[308604]: error: kex_exchange_identification: Connection closed by remote host
Dec 06 10:14:37 np0005548790.localdomain sshd[308604]: Connection closed by 3.137.73.221 port 38862
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.230 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.part /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.converted" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.232 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.301 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e.converted --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.303 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "cb68b180567fda17719a7393615b2f958ad3226e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.349 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.354 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:37 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:37Z|00044|memory|INFO|peak resident set size grew 52% in last 2314.0 seconds, from 14240 kB to 21716 kB
Dec 06 10:14:37 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:37Z|00045|memory|INFO|idl-cells-OVN_Southbound:7806 idl-cells-Open_vSwitch:1098 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:215 lflow-cache-entries-cache-matches:241 lflow-cache-size-KB:826 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:409 ofctrl_installed_flow_usage-KB:299 ofctrl_sb_flow_ref_usage-KB:156
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.834 280869 DEBUG nova.network.neutron [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Successfully updated port: e87832d3-ffc3-44e0-9f77-cd2eb6073d62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.854 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.855 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquired lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.855 280869 DEBUG nova.network.neutron [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.939 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:37 np0005548790.localdomain ceph-mon[301742]: pgmap v99: 177 pgs: 177 active+clean; 213 MiB data, 849 MiB used, 41 GiB / 42 GiB avail; 475 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 06 10:14:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:37.976 280869 DEBUG nova.network.neutron [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.051 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] resizing rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.093 280869 DEBUG nova.compute.manager [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-changed-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.093 280869 DEBUG nova.compute.manager [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Refreshing instance network info cache due to event network-changed-e87832d3-ffc3-44e0-9f77-cd2eb6073d62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.094 280869 DEBUG oslo_concurrency.lockutils [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.190 280869 DEBUG nova.objects.instance [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lazy-loading 'migration_context' on Instance uuid 87dc2ce3-2b16-4764-9803-711c2d12c20f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.213 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.213 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Ensure instance console log exists: /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.214 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.214 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.214 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.298 280869 DEBUG nova.network.neutron [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating instance_info_cache with network_info: [{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.320 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Releasing lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.321 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Instance network_info: |[{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.322 280869 DEBUG oslo_concurrency.lockutils [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquired lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.322 280869 DEBUG nova.network.neutron [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Refreshing network info cache for port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.330 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Start _get_guest_xml network_info=[{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:13:11Z,direct_url=<?>,disk_format='qcow2',id=6a944ab6-8965-4055-b7fc-af6e395005ea,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3d603431c0bb4967bafc7a0aa6108bfe',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:13:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'image_id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.338 280869 WARNING nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.345 280869 DEBUG nova.virt.libvirt.host [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.346 280869 DEBUG nova.virt.libvirt.host [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.348 280869 DEBUG nova.virt.libvirt.host [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.349 280869 DEBUG nova.virt.libvirt.host [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.350 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.350 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:13:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a0a7498e-22eb-495c-a2e3-89ba9e483bf6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:13:11Z,direct_url=<?>,disk_format='qcow2',id=6a944ab6-8965-4055-b7fc-af6e395005ea,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3d603431c0bb4967bafc7a0aa6108bfe',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:13:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.351 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.351 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.352 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.352 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.353 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.353 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.354 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.354 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.355 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.355 280869 DEBUG nova.virt.hardware [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.361 280869 DEBUG nova.privsep.utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.361 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.686 280869 DEBUG nova.network.neutron [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updated VIF entry in instance network info cache for port e87832d3-ffc3-44e0-9f77-cd2eb6073d62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.687 280869 DEBUG nova.network.neutron [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating instance_info_cache with network_info: [{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.713 280869 DEBUG oslo_concurrency.lockutils [req-4b698cbf-f050-4db0-bfd8-e4da92371f26 req-af655f8b-2c0f-4bf5-98b6-976b564933fb 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Releasing lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:14:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4100360430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.818 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.857 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:38.863 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e104 e104: 6 total, 6 up, 6 in
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.001115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079001152, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1054, "num_deletes": 252, "total_data_size": 1314089, "memory_usage": 1341232, "flush_reason": "Manual Compaction"}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4100360430' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079010026, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 849236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17197, "largest_seqno": 18246, "table_properties": {"data_size": 844735, "index_size": 2164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10723, "raw_average_key_size": 20, "raw_value_size": 835388, "raw_average_value_size": 1634, "num_data_blocks": 90, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016021, "oldest_key_time": 1765016021, "file_creation_time": 1765016079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 8963 microseconds, and 3371 cpu microseconds.
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.010079) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 849236 bytes OK
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.010100) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.012917) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.012940) EVENT_LOG_v1 {"time_micros": 1765016079012933, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.012963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1308823, prev total WAL file size 1308823, number of live WAL files 2.
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.013610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end)
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(829KB)], [24(19MB)]
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079013672, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 21114039, "oldest_snapshot_seqno": -1}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 257 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 108 op/s
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12154 keys, 19283376 bytes, temperature: kUnknown
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079153284, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 19283376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19215811, "index_size": 36104, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30405, "raw_key_size": 326844, "raw_average_key_size": 26, "raw_value_size": 19010464, "raw_average_value_size": 1564, "num_data_blocks": 1367, "num_entries": 12154, "num_filter_entries": 12154, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.153537) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 19283376 bytes
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.154803) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.1 rd, 138.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 19.3 +0.0 blob) out(18.4 +0.0 blob), read-write-amplify(47.6) write-amplify(22.7) OK, records in: 12684, records dropped: 530 output_compression: NoCompression
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.154823) EVENT_LOG_v1 {"time_micros": 1765016079154814, "job": 12, "event": "compaction_finished", "compaction_time_micros": 139694, "compaction_time_cpu_micros": 51272, "output_level": 6, "num_output_files": 1, "total_output_size": 19283376, "num_input_records": 12684, "num_output_records": 12154, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079155017, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016079157114, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.013532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.157236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.157246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.157249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.157254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:14:39.157257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/227781456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.299 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.302 280869 DEBUG nova.virt.libvirt.vif [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1999616987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1999616987',id=7,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7897d6398eb64eb29c66df8db792e581',ramdisk_id='',reservation_id='r-tcv45ne4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-265776820',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-265776820-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:14:35Z,user_data=None,user_id='ac2e85103fd14829ad4e6df2357da95b',uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.303 280869 DEBUG nova.network.os_vif_util [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Converting VIF {"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.305 280869 DEBUG nova.network.os_vif_util [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.308 280869 DEBUG nova.objects.instance [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lazy-loading 'pci_devices' on Instance uuid 87dc2ce3-2b16-4764-9803-711c2d12c20f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.329 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <uuid>87dc2ce3-2b16-4764-9803-711c2d12c20f</uuid>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <name>instance-00000007</name>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <memory>131072</memory>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <vcpu>1</vcpu>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <metadata>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1999616987</nova:name>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:creationTime>2025-12-06 10:14:38</nova:creationTime>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:flavor name="m1.nano">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:memory>128</nova:memory>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:disk>1</nova:disk>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:swap>0</nova:swap>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </nova:flavor>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:owner>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:user uuid="ac2e85103fd14829ad4e6df2357da95b">tempest-LiveAutoBlockMigrationV225Test-265776820-project-member</nova:user>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:project uuid="7897d6398eb64eb29c66df8db792e581">tempest-LiveAutoBlockMigrationV225Test-265776820</nova:project>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </nova:owner>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:root type="image" uuid="6a944ab6-8965-4055-b7fc-af6e395005ea"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <nova:ports>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <nova:port uuid="e87832d3-ffc3-44e0-9f77-cd2eb6073d62">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         </nova:port>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </nova:ports>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </nova:instance>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </metadata>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <sysinfo type="smbios">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <system>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <entry name="serial">87dc2ce3-2b16-4764-9803-711c2d12c20f</entry>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <entry name="uuid">87dc2ce3-2b16-4764-9803-711c2d12c20f</entry>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </system>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </sysinfo>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <os>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <boot dev="hd"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <smbios mode="sysinfo"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <acpi/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <apic/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <clock offset="utc">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <timer name="hpet" present="no"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </clock>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <cpu mode="host-model" match="exact">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="disk">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/87dc2ce3-2b16-4764-9803-711c2d12c20f_disk">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <target dev="vda" bus="virtio"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="cdrom">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/87dc2ce3-2b16-4764-9803-711c2d12c20f_disk.config">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <target dev="sda" bus="sata"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <interface type="ethernet">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <mac address="fa:16:3e:0e:f5:37"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <mtu size="1442"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <target dev="tape87832d3-ff"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <serial type="pty">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <log file="/var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/console.log" append="off"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </serial>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <video>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <input type="tablet" bus="usb"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <rng model="virtio">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <controller type="usb" index="0"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     <memballoon model="virtio">
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:       <stats period="10"/>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:     </memballoon>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: </domain>
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.330 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Preparing to wait for external event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.331 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.332 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.332 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.333 280869 DEBUG nova.virt.libvirt.vif [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1999616987',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1999616987',id=7,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7897d6398eb64eb29c66df8db792e581',ramdisk_id='',reservation_id='r-tcv45ne4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-265776820',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-2657
76820-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:14:35Z,user_data=None,user_id='ac2e85103fd14829ad4e6df2357da95b',uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.334 280869 DEBUG nova.network.os_vif_util [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Converting VIF {"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.335 280869 DEBUG nova.network.os_vif_util [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.336 280869 DEBUG os_vif [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.373 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.382 280869 DEBUG ovsdbapp.backend.ovs_idl [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.383 280869 DEBUG ovsdbapp.backend.ovs_idl [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.383 280869 DEBUG ovsdbapp.backend.ovs_idl [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.383 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.384 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.384 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.386 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.388 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.392 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.411 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.412 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.412 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.412 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.412 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.427 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.428 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.429 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.430 280869 INFO oslo.privsep.daemon [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmprgegvg4o/privsep.sock']
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/492198068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.956 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1657712167' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: osdmap e104: 6 total, 6 up, 6 in
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1657712167' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: pgmap v101: 177 pgs: 177 active+clean; 257 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 108 op/s
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/227781456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/492198068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.111 280869 INFO oslo.privsep.daemon [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Spawned new privsep daemon via rootwrap
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.989 308819 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.993 308819 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.997 308819 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:39.997 308819 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308819
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.199 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.201 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11835MB free_disk=41.702293395996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.202 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.203 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.273 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.321 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Instance 87dc2ce3-2b16-4764-9803-711c2d12c20f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.321 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.322 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.373 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.425 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.426 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape87832d3-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.427 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape87832d3-ff, col_values=(('external_ids', {'iface-id': 'e87832d3-ffc3-44e0-9f77-cd2eb6073d62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:f5:37', 'vm-uuid': '87dc2ce3-2b16-4764-9803-711c2d12c20f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.429 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.432 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.435 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.436 280869 INFO os_vif [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff')
Dec 06 10:14:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.488 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.489 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.489 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] No VIF found with MAC fa:16:3e:0e:f5:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.491 280869 INFO nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Using config drive
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.533 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:40 np0005548790.localdomain podman[308829]: 2025-12-06 10:14:40.580649353 +0000 UTC m=+0.090707955 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:40 np0005548790.localdomain podman[308829]: 2025-12-06 10:14:40.619096429 +0000 UTC m=+0.129155021 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec 06 10:14:40 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.734 280869 INFO nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Creating config drive at /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.config
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.740 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_hggci5f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3664938098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.809 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.816 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.851 280869 ERROR nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [req-eed1e772-2544-4603-b17b-eeeaa366e8ca] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 9d142787-bd19-4b53-bf45-24c0e0c1cff0.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-eed1e772-2544-4603-b17b-eeeaa366e8ca"}]}
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.864 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_hggci5f" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.904 280869 DEBUG nova.storage.rbd_utils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] rbd image 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.908 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.config 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.926 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.951 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.951 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:14:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:40.971 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.002 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.039 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 257 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 108 op/s
Dec 06 10:14:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3664938098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e105 e105: 6 total, 6 up, 6 in
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.139 280869 DEBUG oslo_concurrency.processutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.config 87dc2ce3-2b16-4764-9803-711c2d12c20f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.140 280869 INFO nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Deleting local config drive /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f/disk.config because it was imported into RBD.
Dec 06 10:14:41 np0005548790.localdomain systemd[1]: Started libvirt secret daemon.
Dec 06 10:14:41 np0005548790.localdomain kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 06 10:14:41 np0005548790.localdomain kernel: device tape87832d3-ff entered promiscuous mode
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00046|binding|INFO|Claiming lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for this chassis.
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00047|binding|INFO|e87832d3-ffc3-44e0-9f77-cd2eb6073d62: Claiming fa:16:3e:0e:f5:37 10.100.0.14
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00048|binding|INFO|Claiming lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 for this chassis.
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00049|binding|INFO|3b69daca-b91a-4923-9795-2e6a02ee3d59: Claiming fa:16:3e:a8:e1:a6 19.80.0.214
Dec 06 10:14:41 np0005548790.localdomain systemd-udevd[308973]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:41 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016081.2627] manager: (tape87832d3-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/16)
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.261 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.266 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:f5:37 10.100.0.14'], port_security=['fa:16:3e:0e:f5:37 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-876689022', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47d636a7-c520-4320-aa94-bfb41f418584', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-876689022', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6898c302-0153-460c-9cb1-4c62ebc9ff31, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=e87832d3-ffc3-44e0-9f77-cd2eb6073d62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.269 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:e1:a6 19.80.0.214'], port_security=['fa:16:3e:a8:e1:a6 19.80.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e87832d3-ffc3-44e0-9f77-cd2eb6073d62'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-546955816', 'neutron:cidrs': '19.80.0.214/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-546955816', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=f9bb405c-aea0-4a81-a300-475f8e1e8050, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3b69daca-b91a-4923-9795-2e6a02ee3d59) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.270 159200 INFO neutron.agent.ovn.metadata.agent [-] Port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 in datapath 47d636a7-c520-4320-aa94-bfb41f418584 bound to our chassis
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.276 159200 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 47d636a7-c520-4320-aa94-bfb41f418584
Dec 06 10:14:41 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016081.2880] device (tape87832d3-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 10:14:41 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016081.2885] device (tape87832d3-ff): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00050|binding|INFO|Setting lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 ovn-installed in OVS
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00051|binding|INFO|Setting lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 up in Southbound
Dec 06 10:14:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:41Z|00052|binding|INFO|Setting lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 up in Southbound
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.292 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.294 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.348 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:41 np0005548790.localdomain systemd-machined[202564]: New machine qemu-1-instance-00000007.
Dec 06 10:14:41 np0005548790.localdomain systemd[1]: Started Virtual Machine qemu-1-instance-00000007.
Dec 06 10:14:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/95423717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.496 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.501 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.551 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updated inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with generation 6 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.551 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 generation from 6 to 7 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.552 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.583 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.583 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.380s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.624 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016081.6233268, 87dc2ce3-2b16-4764-9803-711c2d12c20f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.624 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Started (Lifecycle Event)
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.644 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.648 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016081.6234531, 87dc2ce3-2b16-4764-9803-711c2d12c20f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.648 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Paused (Lifecycle Event)
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.669 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.671 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.691 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d15f73-b5d9-4da6-9ce8-2c8c2181b72c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.692 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap47d636a7-c1 in ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.693 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.695 262518 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap47d636a7-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.695 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0a27bffb-33ad-4304-8c0c-8d1da43796c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.696 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[2befd88b-9181-4fb9-9791-28e8e6bcb770]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.719 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[27d41fbe-a755-410e-8ab8-a02c1cc92402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.726 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[64ef835f-ca20-440f-8eeb-1fbe645b3b88]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:41.729 159200 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpql91ay0q/privsep.sock']
Dec 06 10:14:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:41.773 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:14:41
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['vms', 'images', 'backups', 'manila_data', 'volumes', '.mgr', 'manila_metadata']
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007865481196491718 of space, bias 1.0, pg target 1.5730962392983434 quantized to 32 (current 32)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.854144129210869 quantized to 32 (current 32)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002159233668341709 quantized to 16 (current 16)
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:14:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:14:42 np0005548790.localdomain ceph-mon[301742]: pgmap v102: 177 pgs: 177 active+clean; 257 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 108 op/s
Dec 06 10:14:42 np0005548790.localdomain ceph-mon[301742]: osdmap e105: 6 total, 6 up, 6 in
Dec 06 10:14:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/95423717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4088784122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e106 e106: 6 total, 6 up, 6 in
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.351 159200 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.352 159200 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpql91ay0q/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.243 309039 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.246 309039 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.248 309039 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.248 309039 INFO oslo.privsep.daemon [-] privsep daemon running as pid 309039
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.355 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[0073cf35-654b-45f6-94b0-93fac0876a0b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:42.543 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:42.544 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:42.545 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:14:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:42.545 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.793 309039 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.793 309039 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:42.793 309039 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 304 op/s
Dec 06 10:14:43 np0005548790.localdomain ceph-mon[301742]: osdmap e106: 6 total, 6 up, 6 in
Dec 06 10:14:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1318618644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3370650055' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.266 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[d5114d3e-7036-4ca1-84a7-789ee563268a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016083.2847] manager: (tap47d636a7-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/17)
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.283 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[f4991209-0051-4596-a32a-67539a524fc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.314 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[8304bd5a-21a5-4e1b-98ad-aef3cde0519e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.317 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[0007559b-fff0-4a8c-8159-baf27afc3b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap47d636a7-c1: link becomes ready
Dec 06 10:14:43 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap47d636a7-c0: link becomes ready
Dec 06 10:14:43 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016083.3398] device (tap47d636a7-c0): carrier: link connected
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.342 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3237d5-10f3-4418-9b40-bcc9909cb33a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.361 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c8952b-dd86-4b83-a1b9-97ec06c1d0dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47d636a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:94:11:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1249847, 'reachable_time': 19066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309078, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.374 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b603aa72-ffd5-4f35-bbc3-7cd6a69f9720]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:1187'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1249847, 'tstamp': 1249847}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309082, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:43.381 262327 INFO neutron.agent.linux.ip_lib [None req-0aa65b71-b05b-4d03-9ebf-387dfbe9a448 - - - - - -] Device tap8a4f3ee4-23 cannot be used as it has no MAC address
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.390 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[9842b82a-6b3d-4583-be14-65a38f57f9a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap47d636a7-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:94:11:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1249847, 'reachable_time': 19066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309084, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.407 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain kernel: device tap8a4f3ee4-23 entered promiscuous mode
Dec 06 10:14:43 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016083.4117] manager: (tap8a4f3ee4-23): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec 06 10:14:43 np0005548790.localdomain systemd-udevd[309057]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.413 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:43Z|00053|binding|INFO|Claiming lport 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 for this chassis.
Dec 06 10:14:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:43Z|00054|binding|INFO|8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2: Claiming unknown
Dec 06 10:14:43 np0005548790.localdomain podman[309066]: 2025-12-06 10:14:43.420915106 +0000 UTC m=+0.099923224 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.425 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-024ac467-702d-4aa3-9f11-5a052a7660a7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024ac467-702d-4aa3-9f11-5a052a7660a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd60454a44a4b4482bf705ee4e3667605', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eef2af5-d600-4158-b892-77d6b006b733, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.428 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9db43e-b8e6-4a9f-b3bf-73a5ee5dedb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:43Z|00055|binding|INFO|Setting lport 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 ovn-installed in OVS
Dec 06 10:14:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:43Z|00056|binding|INFO|Setting lport 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 up in Southbound
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.466 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6438bd-de43-429d-910b-3d1a241b239d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.468 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47d636a7-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.469 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.469 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47d636a7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:43 np0005548790.localdomain podman[309066]: 2025-12-06 10:14:43.488269081 +0000 UTC m=+0.167277179 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.489 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain kernel: device tap47d636a7-c0 entered promiscuous mode
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.492 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.494 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap47d636a7-c0, col_values=(('external_ids', {'iface-id': '8839eeed-ff6b-46d9-b40d-610788617728'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.496 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:43Z|00057|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0)
Dec 06 10:14:43 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.503 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.504 159200 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/47d636a7-c520-4320-aa94-bfb41f418584.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/47d636a7-c520-4320-aa94-bfb41f418584.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.506 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b934dfe1-3b9f-4784-bda2-530b2f355647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.507 159200 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: global
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     log         /dev/log local0 debug
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     log-tag     haproxy-metadata-proxy-47d636a7-c520-4320-aa94-bfb41f418584
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     user        root
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     group       root
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     maxconn     1024
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     pidfile     /var/lib/neutron/external/pids/47d636a7-c520-4320-aa94-bfb41f418584.pid.haproxy
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     daemon
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: defaults
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     log global
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     mode http
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     option httplog
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     option dontlognull
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     option http-server-close
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     option forwardfor
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     retries                 3
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-request    30s
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout connect         30s
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout client          32s
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout server          32s
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-keep-alive 30s
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: listen listener
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     bind 169.254.169.254:80
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:     http-request add-header X-OVN-Network-ID 47d636a7-c520-4320-aa94-bfb41f418584
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.507 159200 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'env', 'PROCESS_TAG=haproxy-47d636a7-c520-4320-aa94-bfb41f418584', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/47d636a7-c520-4320-aa94-bfb41f418584.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:14:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:43.603 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:43 np0005548790.localdomain podman[309161]: 
Dec 06 10:14:43 np0005548790.localdomain podman[309161]: 2025-12-06 10:14:43.854544941 +0000 UTC m=+0.070288075 container create f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:14:43 np0005548790.localdomain systemd[1]: Started libpod-conmon-f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b.scope.
Dec 06 10:14:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:14:43 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791bcba787042cfbc057054e4e5355ffb1adbc4cf7d2d55b357c0159c846e703/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:43 np0005548790.localdomain podman[309161]: 2025-12-06 10:14:43.917351553 +0000 UTC m=+0.133094717 container init f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:14:43 np0005548790.localdomain podman[309161]: 2025-12-06 10:14:43.823534755 +0000 UTC m=+0.039277899 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 10:14:43 np0005548790.localdomain podman[309161]: 2025-12-06 10:14:43.92653922 +0000 UTC m=+0.142282384 container start f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:14:43 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [NOTICE]   (309180) : New worker (309182) forked
Dec 06 10:14:43 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [NOTICE]   (309180) : Loading success.
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.985 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 3b69daca-b91a-4923-9795-2e6a02ee3d59 in datapath 932e7489-8895-41d4-92c6-0d944505e7e6 unbound from our chassis
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.989 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 26f7da5c-ac29-4b53-b23e-4d7e1fbc3e37 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.990 159200 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 932e7489-8895-41d4-92c6-0d944505e7e6
Dec 06 10:14:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.999 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3fd49b-2a05-427d-b435-2407e3dab0e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:43.999 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap932e7489-81 in ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.002 262518 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap932e7489-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.002 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e60d5d26-e513-4b89-846b-02adc43b33e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.003 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[fab1325c-0930-4991-b2f1-75ff12f7f91a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.020 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f2f233-3ad4-454f-9f52-5953989df098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.030 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f6c28b-b8a2-4e25-b3ac-03db4ea4f7d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.055 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[ff85b255-08e8-448f-bcc6-20db80fb5d8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016084.0648] manager: (tap932e7489-80): new Veth device (/org/freedesktop/NetworkManager/Devices/19)
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.064 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[41b87e86-4e0e-4b99-907a-f54a6b62f9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.093 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f6e027-fc9a-46d2-baab-7b331b48e0da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.097 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[74d0ecf1-f342-4ce8-8299-252b163c1f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap932e7489-80: link becomes ready
Dec 06 10:14:44 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016084.1213] device (tap932e7489-80): carrier: link connected
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.125 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[7774cd09-f6df-428f-af77-e86ad0128a3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ceph-mon[301742]: pgmap v105: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 11 MiB/s wr, 304 op/s
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.142 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f25d71-4332-4949-81da-e9171df15e72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap932e7489-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:f3:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1249925, 'reachable_time': 29209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309203, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.155 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[483e5422-0db3-4715-b619-11daadff561d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:f3ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1249925, 'tstamp': 1249925}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309204, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.169 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[ccba6e41-1639-41b4-8c14-cce20ee8cad4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap932e7489-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:f3:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], 
['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1249925, 'reachable_time': 29209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309205, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.191 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3d94887c-7fa6-49ba-ace8-d86016ab0835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.241 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[4421c713-ec5e-4a4e-91df-27ae397b50a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.243 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap932e7489-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.243 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.244 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap932e7489-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:44.246 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:44 np0005548790.localdomain kernel: device tap932e7489-80 entered promiscuous mode
Dec 06 10:14:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:44.250 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.252 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap932e7489-80, col_values=(('external_ids', {'iface-id': '9a87eef5-19db-4fcf-a021-4f61b153af33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:44.254 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:44 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:44Z|00058|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0)
Dec 06 10:14:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:44.266 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.267 159200 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/932e7489-8895-41d4-92c6-0d944505e7e6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/932e7489-8895-41d4-92c6-0d944505e7e6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.269 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[6431df0a-78a6-4639-8fa8-7da496f542f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.270 159200 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: global
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     log         /dev/log local0 debug
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     log-tag     haproxy-metadata-proxy-932e7489-8895-41d4-92c6-0d944505e7e6
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     user        root
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     group       root
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     maxconn     1024
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     pidfile     /var/lib/neutron/external/pids/932e7489-8895-41d4-92c6-0d944505e7e6.pid.haproxy
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     daemon
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: defaults
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     log global
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     mode http
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     option httplog
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     option dontlognull
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     option http-server-close
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     option forwardfor
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     retries                 3
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-request    30s
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout connect         30s
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout client          32s
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout server          32s
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-keep-alive 30s
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: listen listener
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     bind 169.254.169.254:80
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:     http-request add-header X-OVN-Network-ID 932e7489-8895-41d4-92c6-0d944505e7e6
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.271 159200 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'env', 'PROCESS_TAG=haproxy-932e7489-8895-41d4-92c6-0d944505e7e6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/932e7489-8895-41d4-92c6-0d944505e7e6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:14:44 np0005548790.localdomain podman[309254]: 
Dec 06 10:14:44 np0005548790.localdomain podman[309254]: 2025-12-06 10:14:44.71328876 +0000 UTC m=+0.099458440 container create b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:14:44 np0005548790.localdomain podman[309275]: 
Dec 06 10:14:44 np0005548790.localdomain podman[309275]: 2025-12-06 10:14:44.764198892 +0000 UTC m=+0.092767181 container create 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:14:44 np0005548790.localdomain podman[309254]: 2025-12-06 10:14:44.666213681 +0000 UTC m=+0.052383391 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 10:14:44 np0005548790.localdomain systemd[1]: Started libpod-conmon-b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75.scope.
Dec 06 10:14:44 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:14:44 np0005548790.localdomain systemd[1]: Started libpod-conmon-6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10.scope.
Dec 06 10:14:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acebbc5a7fd326383fb625da03e1e57c1b8c73dc122d6355e7c20e587b0c2b07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:44 np0005548790.localdomain podman[309254]: 2025-12-06 10:14:44.803506741 +0000 UTC m=+0.189676391 container init b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:14:44 np0005548790.localdomain podman[309254]: 2025-12-06 10:14:44.811824225 +0000 UTC m=+0.197993875 container start b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:14:44 np0005548790.localdomain podman[309275]: 2025-12-06 10:14:44.730957057 +0000 UTC m=+0.059525366 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:14:44 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [NOTICE]   (309296) : New worker (309298) forked
Dec 06 10:14:44 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [NOTICE]   (309296) : Loading success.
Dec 06 10:14:44 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.871 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.877 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 in datapath 024ac467-702d-4aa3-9f11-5a052a7660a7 unbound from our chassis
Dec 06 10:14:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:44.878 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.883 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4d6816c5-0b20-421d-8882-59ad9c8e9986 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.883 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 024ac467-702d-4aa3-9f11-5a052a7660a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:14:44 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d9d2cc3cfb184effd5973b8db2ceccd9f50b7d1e73e08ee5ddad56bd1deb823/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.884 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[571ac48f-c022-4fad-8c88-7e1d4f4fa99a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:44 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:44.885 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:14:44 np0005548790.localdomain podman[309275]: 2025-12-06 10:14:44.891309908 +0000 UTC m=+0.219878167 container init 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:14:44 np0005548790.localdomain podman[309275]: 2025-12-06 10:14:44.900082874 +0000 UTC m=+0.228651133 container start 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:14:44 np0005548790.localdomain dnsmasq[309319]: started, version 2.85 cachesize 150
Dec 06 10:14:44 np0005548790.localdomain dnsmasq[309319]: DNS service limited to local subnets
Dec 06 10:14:44 np0005548790.localdomain dnsmasq[309319]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:14:44 np0005548790.localdomain dnsmasq[309319]: warning: no upstream servers configured
Dec 06 10:14:44 np0005548790.localdomain dnsmasq-dhcp[309319]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:14:44 np0005548790.localdomain dnsmasq[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/addn_hosts - 0 addresses
Dec 06 10:14:44 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/host
Dec 06 10:14:44 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/opts
Dec 06 10:14:44 np0005548790.localdomain podman[309307]: 2025-12-06 10:14:44.941005867 +0000 UTC m=+0.058881148 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:14:45 np0005548790.localdomain podman[309307]: 2025-12-06 10:14:45.00125196 +0000 UTC m=+0.119127271 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:14:45 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:14:45 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:45.015 262327 INFO neutron.agent.dhcp.agent [None req-a37e27e5-e573-4315-9488-46f29315e59f - - - - - -] DHCP configuration for ports {'4d5a5dfd-bb95-48d4-ad06-828775df30d0'} is completed
Dec 06 10:14:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 9.2 MiB/s wr, 208 op/s
Dec 06 10:14:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2238572858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.362 280869 DEBUG nova.compute.manager [req-5c3388a7-d9f7-44fd-8867-ff6f8d572006 req-6da19b09-9faa-40ef-bf32-3c078a9e65f0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.362 280869 DEBUG oslo_concurrency.lockutils [req-5c3388a7-d9f7-44fd-8867-ff6f8d572006 req-6da19b09-9faa-40ef-bf32-3c078a9e65f0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.362 280869 DEBUG oslo_concurrency.lockutils [req-5c3388a7-d9f7-44fd-8867-ff6f8d572006 req-6da19b09-9faa-40ef-bf32-3c078a9e65f0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.362 280869 DEBUG oslo_concurrency.lockutils [req-5c3388a7-d9f7-44fd-8867-ff6f8d572006 req-6da19b09-9faa-40ef-bf32-3c078a9e65f0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.363 280869 DEBUG nova.compute.manager [req-5c3388a7-d9f7-44fd-8867-ff6f8d572006 req-6da19b09-9faa-40ef-bf32-3c078a9e65f0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Processing event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.363 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.368 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016085.3687108, 87dc2ce3-2b16-4764-9803-711c2d12c20f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.369 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Resumed (Lifecycle Event)
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.370 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.374 280869 INFO nova.virt.libvirt.driver [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Instance spawned successfully.
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.374 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.397 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.402 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.403 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.403 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.404 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.404 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.405 280869 DEBUG nova.virt.libvirt.driver [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.408 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.429 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.450 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.469 280869 INFO nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Took 10.28 seconds to spawn the instance on the hypervisor.
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.470 280869 DEBUG nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.534 280869 INFO nova.compute.manager [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Took 11.23 seconds to build instance.
Dec 06 10:14:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:45.568 280869 DEBUG oslo_concurrency.lockutils [None req-a6adf663-8680-4725-bbbb-726f7a6f6614 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:46 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:14:46.016 2 INFO neutron.agent.securitygroups_rpc [req-842dc70c-4c90-4d04-97b8-ca0a150f47f3 req-694d7e2d-322f-485d-ac12-0a632bb0d8f8 3a50fae64027482ba5b10005ed97189e 024b6fbc052c4ed7a93c855bd2ae77da - - default default] Security group rule updated ['e6cef3ed-f2f1-4e9f-8bb7-b8303074aa1b']
Dec 06 10:14:46 np0005548790.localdomain ceph-mon[301742]: pgmap v106: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 9.2 MiB/s wr, 208 op/s
Dec 06 10:14:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:46.398 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:46.778 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:46 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:14:46.816 2 INFO neutron.agent.securitygroups_rpc [req-3aac6f95-4738-40dc-9407-49685a717c88 req-a8e1311a-c6c3-4f2f-8fef-2b7b3e5084e1 3a50fae64027482ba5b10005ed97189e 024b6fbc052c4ed7a93c855bd2ae77da - - default default] Security group rule updated ['e6cef3ed-f2f1-4e9f-8bb7-b8303074aa1b']
Dec 06 10:14:46 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e107 e107: 6 total, 6 up, 6 in
Dec 06 10:14:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 9.3 MiB/s wr, 210 op/s
Dec 06 10:14:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:47.459 280869 DEBUG nova.compute.manager [req-82a9ee3a-2169-4d9d-80cf-2104d57fb047 req-5f8715eb-4fd4-4fda-936d-3f1d92ea32b0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:47.460 280869 DEBUG oslo_concurrency.lockutils [req-82a9ee3a-2169-4d9d-80cf-2104d57fb047 req-5f8715eb-4fd4-4fda-936d-3f1d92ea32b0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:47.461 280869 DEBUG oslo_concurrency.lockutils [req-82a9ee3a-2169-4d9d-80cf-2104d57fb047 req-5f8715eb-4fd4-4fda-936d-3f1d92ea32b0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:47.461 280869 DEBUG oslo_concurrency.lockutils [req-82a9ee3a-2169-4d9d-80cf-2104d57fb047 req-5f8715eb-4fd4-4fda-936d-3f1d92ea32b0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:47.468 280869 DEBUG nova.compute.manager [req-82a9ee3a-2169-4d9d-80cf-2104d57fb047 req-5f8715eb-4fd4-4fda-936d-3f1d92ea32b0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No waiting events found dispatching network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:47.468 280869 WARNING nova.compute.manager [req-82a9ee3a-2169-4d9d-80cf-2104d57fb047 req-5f8715eb-4fd4-4fda-936d-3f1d92ea32b0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received unexpected event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with vm_state active and task_state None.
Dec 06 10:14:47 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:47.512 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:47Z, description=, device_id=8ac18363-2c8c-4254-a57a-690b1714b140, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857654f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85765490>], id=59d0f9d7-2a18-488d-9ba2-b794e63aecec, ip_allocation=immediate, mac_address=fa:16:3e:d9:28:55, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:40Z, description=, dns_domain=, id=024ac467-702d-4aa3-9f11-5a052a7660a7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-444852489-network, port_security_enabled=True, project_id=d60454a44a4b4482bf705ee4e3667605, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32717, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=531, status=ACTIVE, subnets=['c252a2e1-7157-4762-bcad-39d0167e8158'], tags=[], tenant_id=d60454a44a4b4482bf705ee4e3667605, updated_at=2025-12-06T10:14:41Z, vlan_transparent=None, network_id=024ac467-702d-4aa3-9f11-5a052a7660a7, port_security_enabled=False, project_id=d60454a44a4b4482bf705ee4e3667605, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=596, status=DOWN, tags=[], tenant_id=d60454a44a4b4482bf705ee4e3667605, updated_at=2025-12-06T10:14:47Z on network 024ac467-702d-4aa3-9f11-5a052a7660a7
Dec 06 10:14:47 np0005548790.localdomain podman[309349]: 2025-12-06 10:14:47.765346852 +0000 UTC m=+0.047290895 container kill 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:14:47 np0005548790.localdomain dnsmasq[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/addn_hosts - 1 addresses
Dec 06 10:14:47 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/host
Dec 06 10:14:47 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/opts
Dec 06 10:14:47 np0005548790.localdomain ceph-mon[301742]: osdmap e107: 6 total, 6 up, 6 in
Dec 06 10:14:47 np0005548790.localdomain ceph-mon[301742]: pgmap v108: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 9.3 MiB/s wr, 210 op/s
Dec 06 10:14:47 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:47.964 262327 INFO neutron.agent.dhcp.agent [None req-3e4d4548-4c73-4429-bc10-95e4f69534fd - - - - - -] DHCP configuration for ports {'59d0f9d7-2a18-488d-9ba2-b794e63aecec'} is completed
Dec 06 10:14:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:14:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1632793843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:48.396 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:48.396 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:48.397 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:14:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:14:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:14:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 162761 "" "Go-http-client/1.1"
Dec 06 10:14:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:14:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 21093 "" "Go-http-client/1.1"
Dec 06 10:14:48 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:48.586 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:14:47Z, description=, device_id=8ac18363-2c8c-4254-a57a-690b1714b140, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85762d00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857628e0>], id=59d0f9d7-2a18-488d-9ba2-b794e63aecec, ip_allocation=immediate, mac_address=fa:16:3e:d9:28:55, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:40Z, description=, dns_domain=, id=024ac467-702d-4aa3-9f11-5a052a7660a7, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-444852489-network, port_security_enabled=True, project_id=d60454a44a4b4482bf705ee4e3667605, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32717, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=531, status=ACTIVE, subnets=['c252a2e1-7157-4762-bcad-39d0167e8158'], tags=[], tenant_id=d60454a44a4b4482bf705ee4e3667605, updated_at=2025-12-06T10:14:41Z, vlan_transparent=None, network_id=024ac467-702d-4aa3-9f11-5a052a7660a7, port_security_enabled=False, project_id=d60454a44a4b4482bf705ee4e3667605, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=596, status=DOWN, tags=[], tenant_id=d60454a44a4b4482bf705ee4e3667605, updated_at=2025-12-06T10:14:47Z on network 024ac467-702d-4aa3-9f11-5a052a7660a7
Dec 06 10:14:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:48.775 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Check if temp file /var/lib/nova/instances/tmpe77a5ohg exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 10:14:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:48.776 280869 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='87dc2ce3-2b16-4764-9803-711c2d12c20f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 10:14:48 np0005548790.localdomain podman[309387]: 2025-12-06 10:14:48.82439781 +0000 UTC m=+0.072240528 container kill 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:48 np0005548790.localdomain dnsmasq[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/addn_hosts - 1 addresses
Dec 06 10:14:48 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/host
Dec 06 10:14:48 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/opts
Dec 06 10:14:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1632793843' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:14:49 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:49.001 262327 INFO neutron.agent.dhcp.agent [None req-68a4ce7e-7a1a-4c95-b0db-a76faa870dab - - - - - -] DHCP configuration for ports {'59d0f9d7-2a18-488d-9ba2-b794e63aecec'} is completed
Dec 06 10:14:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.0 MiB/s wr, 263 op/s
Dec 06 10:14:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:49.700 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:49.701 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:49.711 280869 INFO nova.compute.rpcapi [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Dec 06 10:14:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:49.712 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:50 np0005548790.localdomain ceph-mon[301742]: pgmap v109: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.0 MiB/s wr, 263 op/s
Dec 06 10:14:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:50.128 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:50.431 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 6.2 MiB/s wr, 234 op/s
Dec 06 10:14:51 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2395424858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:51.780 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:52 np0005548790.localdomain ceph-mon[301742]: pgmap v110: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 6.2 MiB/s wr, 234 op/s
Dec 06 10:14:52 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/840869338' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:14:52 np0005548790.localdomain sudo[309407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:14:52 np0005548790.localdomain sudo[309407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:14:52 np0005548790.localdomain sudo[309407]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:52 np0005548790.localdomain sudo[309425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:14:52 np0005548790.localdomain sudo[309425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:14:52 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:52.886 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e108 e108: 6 total, 6 up, 6 in
Dec 06 10:14:53 np0005548790.localdomain sudo[309425]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:14:53 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 46c1d003-a8a7-4445-a766-181fa437c3dd (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:14:53 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 46c1d003-a8a7-4445-a766-181fa437c3dd (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:14:53 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 46c1d003-a8a7-4445-a766-181fa437c3dd (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:14:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:14:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:14:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.629 280869 DEBUG nova.compute.manager [req-0d76660a-af77-43f9-8ea3-05f0110216e9 req-23537c79-b501-4743-940b-78db4c50c8d6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-unplugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.630 280869 DEBUG oslo_concurrency.lockutils [req-0d76660a-af77-43f9-8ea3-05f0110216e9 req-23537c79-b501-4743-940b-78db4c50c8d6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.630 280869 DEBUG oslo_concurrency.lockutils [req-0d76660a-af77-43f9-8ea3-05f0110216e9 req-23537c79-b501-4743-940b-78db4c50c8d6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.630 280869 DEBUG oslo_concurrency.lockutils [req-0d76660a-af77-43f9-8ea3-05f0110216e9 req-23537c79-b501-4743-940b-78db4c50c8d6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.631 280869 DEBUG nova.compute.manager [req-0d76660a-af77-43f9-8ea3-05f0110216e9 req-23537c79-b501-4743-940b-78db4c50c8d6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No waiting events found dispatching network-vif-unplugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.631 280869 DEBUG nova.compute.manager [req-0d76660a-af77-43f9-8ea3-05f0110216e9 req-23537c79-b501-4743-940b-78db4c50c8d6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-unplugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:14:53 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:14:53.701 2 INFO neutron.agent.securitygroups_rpc [None req-2bc0f0e9-228c-4272-bb0d-cc31a9019510 b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']
Dec 06 10:14:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:53.710 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:53 np0005548790.localdomain sudo[309475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:14:53 np0005548790.localdomain sudo[309475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:14:53 np0005548790.localdomain sudo[309475]: pam_unix(sudo:session): session closed for user root
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: pgmap v111: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: osdmap e108: 6 total, 6 up, 6 in
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:14:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:14:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 273 op/s
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.480 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.755 280869 DEBUG nova.compute.manager [req-57a07487-b192-457a-863e-3f05eea64eea req-78d40883-b549-463b-aaa9-1b84b3e5d3f9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.755 280869 DEBUG oslo_concurrency.lockutils [req-57a07487-b192-457a-863e-3f05eea64eea req-78d40883-b549-463b-aaa9-1b84b3e5d3f9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.756 280869 DEBUG oslo_concurrency.lockutils [req-57a07487-b192-457a-863e-3f05eea64eea req-78d40883-b549-463b-aaa9-1b84b3e5d3f9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.757 280869 DEBUG oslo_concurrency.lockutils [req-57a07487-b192-457a-863e-3f05eea64eea req-78d40883-b549-463b-aaa9-1b84b3e5d3f9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.757 280869 DEBUG nova.compute.manager [req-57a07487-b192-457a-863e-3f05eea64eea req-78d40883-b549-463b-aaa9-1b84b3e5d3f9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No waiting events found dispatching network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:14:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:55.758 280869 WARNING nova.compute.manager [req-57a07487-b192-457a-863e-3f05eea64eea req-78d40883-b549-463b-aaa9-1b84b3e5d3f9 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received unexpected event network-vif-plugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with vm_state active and task_state migrating.
Dec 06 10:14:56 np0005548790.localdomain ceph-mon[301742]: pgmap v113: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 273 op/s
Dec 06 10:14:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:56.782 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:56 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:14:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:14:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.069 280869 INFO nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Took 7.37 seconds for pre_live_migration on destination host np0005548789.localdomain.
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.070 280869 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.094 280869 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpe77a5ohg',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='87dc2ce3-2b16-4764-9803-711c2d12c20f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(b90e00bf-bb94-4755-ba96-2ce831a9f185),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.097 280869 DEBUG nova.objects.instance [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lazy-loading 'migration_context' on Instance uuid 87dc2ce3-2b16-4764-9803-711c2d12c20f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.098 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.101 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.101 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.121 280869 DEBUG nova.virt.libvirt.vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1999616987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1999616987',id=7,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:14:45Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7897d6398eb64eb29c66df8db792e581',ramdisk_id='',reservation_id='r-tcv45ne4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-265776820',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-265776820-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:14:45Z,user_data=None,user_id='ac2e85103fd14829ad4e6df2357da95b',uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.121 280869 DEBUG nova.network.os_vif_util [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Converting VIF {"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.122 280869 DEBUG nova.network.os_vif_util [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.123 280869 DEBUG nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]:   <mac address="fa:16:3e:0e:f5:37"/>
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]:   <model type="virtio"/>
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]:   <mtu size="1442"/>
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]:   <target dev="tape87832d3-ff"/>
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: </interface>
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.123 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.604 280869 DEBUG nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.605 280869 INFO nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 06 10:14:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:57.686 280869 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 10:14:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:14:57 np0005548790.localdomain ceph-mon[301742]: pgmap v114: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 221 op/s
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.107 280869 DEBUG nova.compute.manager [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-changed-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.108 280869 DEBUG nova.compute.manager [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Refreshing instance network info cache due to event network-changed-e87832d3-ffc3-44e0-9f77-cd2eb6073d62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.108 280869 DEBUG oslo_concurrency.lockutils [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.109 280869 DEBUG oslo_concurrency.lockutils [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquired lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.109 280869 DEBUG nova.network.neutron [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Refreshing network info cache for port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.195 280869 DEBUG nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.196 280869 DEBUG nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.700 280869 DEBUG nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.701 280869 DEBUG nova.virt.libvirt.migration [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.725 280869 DEBUG nova.network.neutron [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updated VIF entry in instance network info cache for port e87832d3-ffc3-44e0-9f77-cd2eb6073d62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.725 280869 DEBUG nova.network.neutron [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Updating instance_info_cache with network_info: [{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005548789.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.747 280869 DEBUG oslo_concurrency.lockutils [req-282ec658-8f1d-4f9b-857c-cee8f0584596 req-55622473-aeb4-4782-bbbc-0916f6a65c6f 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Releasing lock "refresh_cache-87dc2ce3-2b16-4764-9803-711c2d12c20f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.856 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016098.8564496, 87dc2ce3-2b16-4764-9803-711c2d12c20f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.857 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Paused (Lifecycle Event)
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.880 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.883 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:14:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:58.930 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 06 10:14:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:14:59 np0005548790.localdomain kernel: device tape87832d3-ff left promiscuous mode
Dec 06 10:14:59 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016099.0444] device (tape87832d3-ff): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 06 10:14:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 283 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 6.3 MiB/s wr, 256 op/s
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00059|binding|INFO|Releasing lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 from this chassis (sb_readonly=0)
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00060|binding|INFO|Setting lport e87832d3-ffc3-44e0-9f77-cd2eb6073d62 down in Southbound
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00061|binding|INFO|Releasing lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 from this chassis (sb_readonly=0)
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00062|binding|INFO|Setting lport 3b69daca-b91a-4923-9795-2e6a02ee3d59 down in Southbound
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.096 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00063|binding|INFO|Removing iface tape87832d3-ff ovn-installed in OVS
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.100 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00064|binding|INFO|Releasing lport 9a87eef5-19db-4fcf-a021-4f61b153af33 from this chassis (sb_readonly=0)
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00065|binding|INFO|Releasing lport 8839eeed-ff6b-46d9-b40d-610788617728 from this chassis (sb_readonly=0)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 13.165s CPU time.
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.112 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:f5:37 10.100.0.14'], port_security=['fa:16:3e:0e:f5:37 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain,np0005548789.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'b142a5ef-fbed-4e92-aa78-e3ad080c6370'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-876689022', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '87dc2ce3-2b16-4764-9803-711c2d12c20f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-47d636a7-c520-4320-aa94-bfb41f418584', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-876689022', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6898c302-0153-460c-9cb1-4c62ebc9ff31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=e87832d3-ffc3-44e0-9f77-cd2eb6073d62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.114 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:e1:a6 19.80.0.214'], port_security=['fa:16:3e:a8:e1:a6 19.80.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['e87832d3-ffc3-44e0-9f77-cd2eb6073d62'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-546955816', 'neutron:cidrs': '19.80.0.214/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-546955816', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'bfad329a-0ea3-4b02-8e91-9d15749f8c9b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=f9bb405c-aea0-4a81-a300-475f8e1e8050, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=3b69daca-b91a-4923-9795-2e6a02ee3d59) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.115 159200 INFO neutron.agent.ovn.metadata.agent [-] Port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 in datapath 47d636a7-c520-4320-aa94-bfb41f418584 unbound from our chassis
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.119 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 47d636a7-c520-4320-aa94-bfb41f418584, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:14:59 np0005548790.localdomain systemd-machined[202564]: Machine qemu-1-instance-00000007 terminated.
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.119 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc13aff-3e11-4860-b2f7-fc41e14492e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.120 159200 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 namespace which is not needed anymore
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.146 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.159 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain virtqemud[228868]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/87dc2ce3-2b16-4764-9803-711c2d12c20f_disk: No such file or directory
Dec 06 10:14:59 np0005548790.localdomain virtqemud[228868]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/87dc2ce3-2b16-4764-9803-711c2d12c20f_disk: No such file or directory
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.206 280869 DEBUG nova.virt.libvirt.guest [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.207 280869 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migration operation has completed
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.208 280869 INFO nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] _post_live_migration() is started..
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.220 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.220 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.221 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [NOTICE]   (309180) : haproxy version is 2.8.14-c23fe91
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [NOTICE]   (309180) : path to executable is /usr/sbin/haproxy
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [WARNING]  (309180) : Exiting Master process...
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [ALERT]    (309180) : Current worker (309182) exited with code 143 (Terminated)
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584[309176]: [WARNING]  (309180) : All workers exited. Exiting... (0)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: libpod-f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b.scope: Deactivated successfully.
Dec 06 10:14:59 np0005548790.localdomain podman[309529]: 2025-12-06 10:14:59.30005113 +0000 UTC m=+0.072467214 container died f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:14:59 np0005548790.localdomain podman[309529]: 2025-12-06 10:14:59.343627525 +0000 UTC m=+0.116043509 container cleanup f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:14:59 np0005548790.localdomain podman[309541]: 2025-12-06 10:14:59.371482675 +0000 UTC m=+0.062925447 container cleanup f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: libpod-conmon-f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b.scope: Deactivated successfully.
Dec 06 10:14:59 np0005548790.localdomain podman[309555]: 2025-12-06 10:14:59.426871228 +0000 UTC m=+0.067300415 container remove f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.431 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b12df255-d7ec-4e20-a647-b3ddd6a47f5b]: (4, ('Sat Dec  6 10:14:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 (f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b)\nf23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b\nSat Dec  6 10:14:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 (f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b)\nf23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.434 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8988d506-67e9-472d-baaa-6a9e48078306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.435 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47d636a7-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.437 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain kernel: device tap47d636a7-c0 left promiscuous mode
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.450 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.454 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfaa4e1-a026-4b8b-8fe5-da2331c5ba7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.472 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd08767-0c07-46cb-ae99-b56c1d537e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.473 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[cffb33da-86e7-4778-b41d-5361f6dc84d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.488 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba8325-52e9-4a2c-be26-1865d300c9e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1249839, 'reachable_time': 18003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309576, 'error': None, 'target': 'ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.496 159379 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-47d636a7-c520-4320-aa94-bfb41f418584 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.497 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[2becc0d4-de03-4fbb-90d3-e5472c3e08ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.498 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 3b69daca-b91a-4923-9795-2e6a02ee3d59 in datapath 932e7489-8895-41d4-92c6-0d944505e7e6 unbound from our chassis
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.502 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 26f7da5c-ac29-4b53-b23e-4d7e1fbc3e37 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.503 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 932e7489-8895-41d4-92c6-0d944505e7e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.503 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[13097bb0-ce58-42d8-a6e2-ad8dd62d7d66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.504 159200 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 namespace which is not needed anymore
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [NOTICE]   (309296) : haproxy version is 2.8.14-c23fe91
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [NOTICE]   (309296) : path to executable is /usr/sbin/haproxy
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [WARNING]  (309296) : Exiting Master process...
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [ALERT]    (309296) : Current worker (309298) exited with code 143 (Terminated)
Dec 06 10:14:59 np0005548790.localdomain neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6[309288]: [WARNING]  (309296) : All workers exited. Exiting... (0)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: libpod-b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75.scope: Deactivated successfully.
Dec 06 10:14:59 np0005548790.localdomain podman[309596]: 2025-12-06 10:14:59.693435561 +0000 UTC m=+0.076085901 container died b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:14:59 np0005548790.localdomain podman[309596]: 2025-12-06 10:14:59.722757211 +0000 UTC m=+0.105407541 container cleanup b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:14:59 np0005548790.localdomain podman[309608]: 2025-12-06 10:14:59.76911468 +0000 UTC m=+0.067419027 container cleanup b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: libpod-conmon-b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75.scope: Deactivated successfully.
Dec 06 10:14:59 np0005548790.localdomain podman[309629]: 2025-12-06 10:14:59.820883485 +0000 UTC m=+0.078669551 container remove b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.828 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d3cc0a83-ce9f-48b9-81e3-bf6a1c367ab4]: (4, ('Sat Dec  6 10:14:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 (b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75)\nb1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75\nSat Dec  6 10:14:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 (b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75)\nb1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.829 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[efe7de0d-7e08-4593-ac76-d26b101f80b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.830 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap932e7489-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:14:59 np0005548790.localdomain kernel: device tap932e7489-80 left promiscuous mode
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.834 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.842 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.847 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e13d8e-632d-40bb-bdff-e1bc29f9ed10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.869 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[21d7104e-6447-4085-9a3b-acbc8160cced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.870 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[5570befa-6f6f-4749-a02b-f958506b5195]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.889 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2d98a1-89df-403d-99d0-5489ad7075b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1249918, 'reachable_time': 17904, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309662, 'error': None, 'target': 'ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.891 159379 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-932e7489-8895-41d4-92c6-0d944505e7e6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.891 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2f370a-ae3e-455e-880f-ab79769c3840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:14:59 np0005548790.localdomain podman[309610]: 2025-12-06 10:14:59.908140416 +0000 UTC m=+0.195735216 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:14:59 np0005548790.localdomain podman[309610]: 2025-12-06 10:14:59.913462669 +0000 UTC m=+0.201057489 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 10:14:59 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:14:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:14:59.933 262327 INFO neutron.agent.linux.ip_lib [None req-1013b1cd-d791-498f-932f-fa4ea573d338 - - - - - -] Device tap1766b235-0b cannot be used as it has no MAC address
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.961 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain kernel: device tap1766b235-0b entered promiscuous mode
Dec 06 10:14:59 np0005548790.localdomain systemd-udevd[309498]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:14:59 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016099.9668] manager: (tap1766b235-0b): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00066|binding|INFO|Claiming lport 1766b235-0baa-458c-b553-7258f331e206 for this chassis.
Dec 06 10:14:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:14:59Z|00067|binding|INFO|1766b235-0baa-458c-b553-7258f331e206: Claiming unknown
Dec 06 10:14:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:14:59.970 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.996 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927c8639-172d-4240-b8a1-85db1fd6c03d, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=1766b235-0baa-458c-b553-7258f331e206) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.996 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 1766b235-0baa-458c-b553-7258f331e206 in datapath 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 bound to our chassis
Dec 06 10:14:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.998 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:15:00 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:14:59.999 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[22c32345-cce3-4ae8-ba8c-6b5dd6f54cb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:00 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:00Z|00068|binding|INFO|Setting lport 1766b235-0baa-458c-b553-7258f331e206 ovn-installed in OVS
Dec 06 10:15:00 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:00Z|00069|binding|INFO|Setting lport 1766b235-0baa-458c-b553-7258f331e206 up in Southbound
Dec 06 10:15:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:00.007 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:00.043 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:00.073 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:00 np0005548790.localdomain ceph-mon[301742]: pgmap v115: 177 pgs: 177 active+clean; 283 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 6.3 MiB/s wr, 256 op/s
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-acebbc5a7fd326383fb625da03e1e57c1b8c73dc122d6355e7c20e587b0c2b07-merged.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1793513deb709e28d68456ad9689ea69694c7890010a7670d49f3a289382a75-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: run-netns-ovnmeta\x2d932e7489\x2d8895\x2d41d4\x2d92c6\x2d0d944505e7e6.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-791bcba787042cfbc057054e4e5355ffb1adbc4cf7d2d55b357c0159c846e703-merged.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f23da036ec21d29dbbeb71d57b75ba4f866c722448867bc959dfe5de00e8418b-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: run-netns-ovnmeta\x2d47d636a7\x2dc520\x2d4320\x2daa94\x2dbfb41f418584.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:00.482 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:00 np0005548790.localdomain podman[309731]: 
Dec 06 10:15:00 np0005548790.localdomain podman[309731]: 2025-12-06 10:15:00.924326199 +0000 UTC m=+0.085761793 container create 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81.scope.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: tmp-crun.oJer6j.mount: Deactivated successfully.
Dec 06 10:15:00 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:15:00 np0005548790.localdomain podman[309731]: 2025-12-06 10:15:00.882649205 +0000 UTC m=+0.044084799 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:15:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d31bfadcf4150577d36d5f63ebd34ecf371bac720edb6b14f5d4d13960aa3a3c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:00 np0005548790.localdomain podman[309731]: 2025-12-06 10:15:00.992527126 +0000 UTC m=+0.153962760 container init 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:15:01 np0005548790.localdomain podman[309731]: 2025-12-06 10:15:01.000952903 +0000 UTC m=+0.162388527 container start 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:15:01 np0005548790.localdomain dnsmasq[309749]: started, version 2.85 cachesize 150
Dec 06 10:15:01 np0005548790.localdomain dnsmasq[309749]: DNS service limited to local subnets
Dec 06 10:15:01 np0005548790.localdomain dnsmasq[309749]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:15:01 np0005548790.localdomain dnsmasq[309749]: warning: no upstream servers configured
Dec 06 10:15:01 np0005548790.localdomain dnsmasq-dhcp[309749]: DHCP, static leases only on 19.80.0.0, lease time 1d
Dec 06 10:15:01 np0005548790.localdomain dnsmasq[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/addn_hosts - 0 addresses
Dec 06 10:15:01 np0005548790.localdomain dnsmasq-dhcp[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/host
Dec 06 10:15:01 np0005548790.localdomain dnsmasq-dhcp[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/opts
Dec 06 10:15:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 283 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 6.3 MiB/s wr, 256 op/s
Dec 06 10:15:01 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:01.202 262327 INFO neutron.agent.dhcp.agent [None req-6cfd4e0f-5738-4b3a-af2b-fa554799fcf2 - - - - - -] DHCP configuration for ports {'b960e3cf-838e-4b32-93f1-7da76cedadcc'} is completed
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.608 280869 DEBUG nova.compute.manager [req-25f97bb0-b450-465a-9d87-fe26de9af559 req-acfa77b7-2928-496a-b2d5-5355dbf0f8ba 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-unplugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.609 280869 DEBUG oslo_concurrency.lockutils [req-25f97bb0-b450-465a-9d87-fe26de9af559 req-acfa77b7-2928-496a-b2d5-5355dbf0f8ba 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.610 280869 DEBUG oslo_concurrency.lockutils [req-25f97bb0-b450-465a-9d87-fe26de9af559 req-acfa77b7-2928-496a-b2d5-5355dbf0f8ba 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.611 280869 DEBUG oslo_concurrency.lockutils [req-25f97bb0-b450-465a-9d87-fe26de9af559 req-acfa77b7-2928-496a-b2d5-5355dbf0f8ba 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.611 280869 DEBUG nova.compute.manager [req-25f97bb0-b450-465a-9d87-fe26de9af559 req-acfa77b7-2928-496a-b2d5-5355dbf0f8ba 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] No waiting events found dispatching network-vif-unplugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.612 280869 DEBUG nova.compute.manager [req-25f97bb0-b450-465a-9d87-fe26de9af559 req-acfa77b7-2928-496a-b2d5-5355dbf0f8ba 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Received event network-vif-unplugged-e87832d3-ffc3-44e0-9f77-cd2eb6073d62 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:15:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:01.786 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e109 e109: 6 total, 6 up, 6 in
Dec 06 10:15:02 np0005548790.localdomain ceph-mon[301742]: pgmap v116: 177 pgs: 177 active+clean; 283 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 6.3 MiB/s wr, 256 op/s
Dec 06 10:15:02 np0005548790.localdomain ceph-mon[301742]: osdmap e109: 6 total, 6 up, 6 in
Dec 06 10:15:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 179 op/s
Dec 06 10:15:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:04 np0005548790.localdomain ceph-mon[301742]: pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 179 op/s
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.675 280869 DEBUG nova.network.neutron [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Activated binding for port e87832d3-ffc3-44e0-9f77-cd2eb6073d62 and host np0005548789.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.675 280869 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.676 280869 DEBUG nova.virt.libvirt.vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:14:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1999616987',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1999616987',id=7,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:14:45Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='7897d6398eb64eb29c66df8db792e581',ramdisk_id='',reservation_id='r-tcv45ne4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-265776820',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-265776820-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:14:47Z,user_data=None,user_id='ac2e85103fd14829ad4e6df2357da95b',uuid=87dc2ce3-2b16-4764-9803-711c2d12c20f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.677 280869 DEBUG nova.network.os_vif_util [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Converting VIF {"id": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "address": "fa:16:3e:0e:f5:37", "network": {"id": "47d636a7-c520-4320-aa94-bfb41f418584", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1313845827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "7897d6398eb64eb29c66df8db792e581", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape87832d3-ff", "ovs_interfaceid": "e87832d3-ffc3-44e0-9f77-cd2eb6073d62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.678 280869 DEBUG nova.network.os_vif_util [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.679 280869 DEBUG os_vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.681 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.682 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape87832d3-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.684 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.687 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.687 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.690 280869 INFO os_vif [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:f5:37,bridge_name='br-int',has_traffic_filtering=True,id=e87832d3-ffc3-44e0-9f77-cd2eb6073d62,network=Network(47d636a7-c520-4320-aa94-bfb41f418584),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape87832d3-ff')
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.691 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.691 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.691 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.692 280869 DEBUG nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.692 280869 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Deleting instance files /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f_del
Dec 06 10:15:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:04.693 280869 INFO nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Deletion of /var/lib/nova/instances/87dc2ce3-2b16-4764-9803-711c2d12c20f_del complete
Dec 06 10:15:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 178 op/s
Dec 06 10:15:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:15:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:15:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:15:05 np0005548790.localdomain podman[309750]: 2025-12-06 10:15:05.577899795 +0000 UTC m=+0.087798826 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:15:05 np0005548790.localdomain podman[309750]: 2025-12-06 10:15:05.590176877 +0000 UTC m=+0.100075968 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:15:05 np0005548790.localdomain podman[309751]: 2025-12-06 10:15:05.633856623 +0000 UTC m=+0.141760610 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:15:05 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:15:05 np0005548790.localdomain podman[309752]: 2025-12-06 10:15:05.694140308 +0000 UTC m=+0.197050011 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec 06 10:15:05 np0005548790.localdomain podman[309752]: 2025-12-06 10:15:05.708011692 +0000 UTC m=+0.210921365 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:15:05 np0005548790.localdomain podman[309751]: 2025-12-06 10:15:05.718532284 +0000 UTC m=+0.226436261 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:15:05 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:15:05 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:15:06 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:06.020 2 INFO neutron.agent.securitygroups_rpc [None req-32f5fe5b-2f75-4cad-9292-d5acba05dc94 b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']
Dec 06 10:15:06 np0005548790.localdomain ceph-mon[301742]: pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 178 op/s
Dec 06 10:15:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:06.342 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:15:05Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8581fb80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8581f6a0>], id=99b309b3-9e3d-4a23-b110-d99707c2eb4e, ip_allocation=immediate, mac_address=fa:16:3e:11:27:4d, name=tempest-subport-2060007817, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:14:54Z, description=, dns_domain=, id=19043ea6-c6b2-4272-aa60-1b11a7b5bd93, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-1488836961, port_security_enabled=True, project_id=9167331b2c424ef6961b096b551f8434, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19442, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=636, status=ACTIVE, subnets=['f8f82a3a-fd97-4442-8b92-e8f382ef6fe8'], tags=[], tenant_id=9167331b2c424ef6961b096b551f8434, updated_at=2025-12-06T10:14:58Z, vlan_transparent=None, network_id=19043ea6-c6b2-4272-aa60-1b11a7b5bd93, port_security_enabled=True, project_id=9167331b2c424ef6961b096b551f8434, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['4c82b56e-0fc5-4c7f-8922-ceb8236815fd'], standard_attr_id=656, status=DOWN, tags=[], tenant_id=9167331b2c424ef6961b096b551f8434, updated_at=2025-12-06T10:15:05Z on network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93
Dec 06 10:15:06 np0005548790.localdomain podman[309825]: 2025-12-06 10:15:06.546309261 +0000 UTC m=+0.046097338 container kill 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:15:06 np0005548790.localdomain dnsmasq[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/addn_hosts - 1 addresses
Dec 06 10:15:06 np0005548790.localdomain dnsmasq-dhcp[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/host
Dec 06 10:15:06 np0005548790.localdomain dnsmasq-dhcp[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/opts
Dec 06 10:15:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:06.790 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:06.807 262327 INFO neutron.agent.dhcp.agent [None req-01e1302c-2aa7-4d0d-bb56-c6956037cd15 - - - - - -] DHCP configuration for ports {'99b309b3-9e3d-4a23-b110-d99707c2eb4e'} is completed
Dec 06 10:15:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 304 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 178 op/s
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:15:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:15:07 np0005548790.localdomain ceph-mon[301742]: pgmap v120: 177 pgs: 177 active+clean; 304 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 178 op/s
Dec 06 10:15:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:08.082 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e110 e110: 6 total, 6 up, 6 in
Dec 06 10:15:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 306 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.610 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.649 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.650 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.650 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "87dc2ce3-2b16-4764-9803-711c2d12c20f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.683 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.710 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.711 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.711 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.711 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:15:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:09.712 280869 DEBUG oslo_concurrency.processutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:10 np0005548790.localdomain ceph-mon[301742]: osdmap e110: 6 total, 6 up, 6 in
Dec 06 10:15:10 np0005548790.localdomain ceph-mon[301742]: pgmap v122: 177 pgs: 177 active+clean; 306 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Dec 06 10:15:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/263018422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.215 280869 DEBUG oslo_concurrency.processutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.420 280869 WARNING nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.422 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11647MB free_disk=41.563785552978516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.422 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.423 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.466 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Migration for instance 87dc2ce3-2b16-4764-9803-711c2d12c20f refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.508 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.540 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Migration b90e00bf-bb94-4755-ba96-2ce831a9f185 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.540 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.541 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:15:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:10.609 280869 DEBUG oslo_concurrency.processutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2133323897' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.032 280869 DEBUG oslo_concurrency.processutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.039 280869 DEBUG nova.compute.provider_tree [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e111 e111: 6 total, 6 up, 6 in
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.051 280869 DEBUG nova.scheduler.client.report [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 306 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 793 KiB/s rd, 64 KiB/s wr, 70 op/s
Dec 06 10:15:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/263018422' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2133323897' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.075 280869 DEBUG nova.compute.resource_tracker [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.075 280869 DEBUG oslo_concurrency.lockutils [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.081 280869 INFO nova.compute.manager [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Migrating instance to np0005548789.localdomain finished successfully.
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.244 280869 INFO nova.scheduler.client.report [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] Deleted allocation for migration b90e00bf-bb94-4755-ba96-2ce831a9f185
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.245 280869 DEBUG nova.virt.libvirt.driver [None req-72030b45-f187-4b30-a9d9-59e0042b4b0f 310ca84346a742a09c8478aa7405cb30 5bd426c09dd743399e71eb5c44db45cb - - default default] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 10:15:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.546 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:11 np0005548790.localdomain systemd[1]: tmp-crun.zn2ZlQ.mount: Deactivated successfully.
Dec 06 10:15:11 np0005548790.localdomain podman[309891]: 2025-12-06 10:15:11.583962642 +0000 UTC m=+0.095424542 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:15:11 np0005548790.localdomain podman[309891]: 2025-12-06 10:15:11.5931447 +0000 UTC m=+0.104606600 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:15:11 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:15:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:11.835 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:12 np0005548790.localdomain ceph-mon[301742]: pgmap v124: 177 pgs: 177 active+clean; 306 MiB data, 1009 MiB used, 41 GiB / 42 GiB avail; 793 KiB/s rd, 64 KiB/s wr, 70 op/s
Dec 06 10:15:12 np0005548790.localdomain ceph-mon[301742]: osdmap e111: 6 total, 6 up, 6 in
Dec 06 10:15:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e112 e112: 6 total, 6 up, 6 in
Dec 06 10:15:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:12.232 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:12.801 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:12.802 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:12.827 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:15:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:12.995 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:12.996 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.002 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.003 280869 INFO nova.compute.claims [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Claim successful on node np0005548790.localdomain
Dec 06 10:15:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v126: 177 pgs: 177 active+clean; 387 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.9 MiB/s rd, 7.8 MiB/s wr, 266 op/s
Dec 06 10:15:13 np0005548790.localdomain ceph-mon[301742]: osdmap e112: 6 total, 6 up, 6 in
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.151 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:13 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2629540065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.596 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.604 280869 DEBUG nova.compute.provider_tree [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.627 280869 DEBUG nova.scheduler.client.report [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.650 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.652 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.698 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.699 280869 DEBUG nova.network.neutron [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.716 280869 INFO nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.746 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.868 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.870 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.871 280869 INFO nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Creating image(s)
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.911 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:13.964 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.002 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.006 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.079 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.080 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "cb68b180567fda17719a7393615b2f958ad3226e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.081 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "cb68b180567fda17719a7393615b2f958ad3226e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.081 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "cb68b180567fda17719a7393615b2f958ad3226e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.117 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.122 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:14 np0005548790.localdomain ceph-mon[301742]: pgmap v126: 177 pgs: 177 active+clean; 387 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.9 MiB/s rd, 7.8 MiB/s wr, 266 op/s
Dec 06 10:15:14 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2629540065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.169 280869 DEBUG nova.policy [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b25d9e5ec9eb4368a764482a325b9dda', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9167331b2c424ef6961b096b551f8434', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.206 280869 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016099.205541, 87dc2ce3-2b16-4764-9803-711c2d12c20f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.207 280869 INFO nova.compute.manager [-] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] VM Stopped (Lifecycle Event)
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.237 280869 DEBUG nova.compute.manager [None req-ae40be30-0ac6-453e-971f-f231fd76e28b - - - - - -] [instance: 87dc2ce3-2b16-4764-9803-711c2d12c20f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:15:14 np0005548790.localdomain podman[310025]: 2025-12-06 10:15:14.584916427 +0000 UTC m=+0.098492424 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:15:14 np0005548790.localdomain podman[310025]: 2025-12-06 10:15:14.628838086 +0000 UTC m=+0.142414023 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:15:14 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.673 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.714 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.771 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] resizing rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:15:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:15:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:15:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.920 280869 DEBUG nova.objects.instance [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lazy-loading 'migration_context' on Instance uuid ed40901b-0bfc-426a-bf70-48d87ce95aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.941 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.942 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Ensure instance console log exists: /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.943 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.943 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:14.944 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 387 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 171 op/s
Dec 06 10:15:15 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/975209313' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:15:15 np0005548790.localdomain podman[310120]: 2025-12-06 10:15:15.561986504 +0000 UTC m=+0.076257334 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:15 np0005548790.localdomain podman[310120]: 2025-12-06 10:15:15.598724888 +0000 UTC m=+0.112995708 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:15:15 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.789 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Acquiring lock "3d34a856-7613-4158-b859-fb3089fe3bc7" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.791 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "3d34a856-7613-4158-b859-fb3089fe3bc7" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.792 280869 INFO nova.compute.manager [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Unshelving
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.867 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.868 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.870 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.884 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.903 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:15:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:15.903 280869 INFO nova.compute.claims [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Claim successful on node np0005548790.localdomain
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.030 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:16 np0005548790.localdomain ceph-mon[301742]: pgmap v127: 177 pgs: 177 active+clean; 387 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 171 op/s
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.474 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:16 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2224318560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.492 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.500 280869 DEBUG nova.compute.provider_tree [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.524 280869 DEBUG nova.scheduler.client.report [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.527 280869 DEBUG nova.network.neutron [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Successfully updated port: feb6a13d-305a-4541-a50e-4988833ecf82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.553 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.554 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquired lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.554 280869 DEBUG nova.network.neutron [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.557 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.588 280869 DEBUG nova.compute.manager [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-changed-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.589 280869 DEBUG nova.compute.manager [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Refreshing instance network info cache due to event network-changed-feb6a13d-305a-4541-a50e-4988833ecf82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.589 280869 DEBUG oslo_concurrency.lockutils [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.599 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Acquiring lock "refresh_cache-3d34a856-7613-4158-b859-fb3089fe3bc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.599 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Acquired lock "refresh_cache-3d34a856-7613-4158-b859-fb3089fe3bc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.599 280869 DEBUG nova.network.neutron [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.623 280869 DEBUG nova.network.neutron [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.640 280869 DEBUG nova.network.neutron [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.759 280869 DEBUG nova.network.neutron [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.776 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Releasing lock "refresh_cache-3d34a856-7613-4158-b859-fb3089fe3bc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.777 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.778 280869 INFO nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Creating image(s)
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.805 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] rbd image 3d34a856-7613-4158-b859-fb3089fe3bc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.809 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.870 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] rbd image 3d34a856-7613-4158-b859-fb3089fe3bc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.898 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] rbd image 3d34a856-7613-4158-b859-fb3089fe3bc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.903 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Acquiring lock "eb6ae4f9f77a50e8bd51ea1693a49e681e852e85" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.903 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "eb6ae4f9f77a50e8bd51ea1693a49e681e852e85" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.908 280869 DEBUG nova.network.neutron [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.912 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.930 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Releasing lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.930 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Instance network_info: |[{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.930 280869 DEBUG oslo_concurrency.lockutils [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquired lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.931 280869 DEBUG nova.network.neutron [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Refreshing network info cache for port feb6a13d-305a-4541-a50e-4988833ecf82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.933 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Start _get_guest_xml network_info=[{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:13:11Z,direct_url=<?>,disk_format='qcow2',id=6a944ab6-8965-4055-b7fc-af6e395005ea,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3d603431c0bb4967bafc7a0aa6108bfe',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:13:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'image_id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.938 280869 WARNING nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.945 280869 DEBUG nova.virt.libvirt.host [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.946 280869 DEBUG nova.virt.libvirt.host [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:15:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:16.949 280869 DEBUG nova.virt.libvirt.imagebackend [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Image locations are: [{'url': 'rbd://1939e851-b10c-5c3b-9bb7-8e7f380233e8/images/af540be2-bf52-4bff-b4bd-6dea5cca6542/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://1939e851-b10c-5c3b-9bb7-8e7f380233e8/images/af540be2-bf52-4bff-b4bd-6dea5cca6542/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 06 10:15:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e113 e113: 6 total, 6 up, 6 in
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.019 280869 DEBUG nova.virt.libvirt.host [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.020 280869 DEBUG nova.virt.libvirt.host [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.020 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.021 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:13:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a0a7498e-22eb-495c-a2e3-89ba9e483bf6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:13:11Z,direct_url=<?>,disk_format='qcow2',id=6a944ab6-8965-4055-b7fc-af6e395005ea,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3d603431c0bb4967bafc7a0aa6108bfe',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:13:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.021 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.022 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.022 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.022 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.023 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.023 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.024 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.024 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.025 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.025 280869 DEBUG nova.virt.hardware [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.028 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.051 280869 DEBUG nova.virt.libvirt.imagebackend [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Selected location: {'url': 'rbd://1939e851-b10c-5c3b-9bb7-8e7f380233e8/images/af540be2-bf52-4bff-b4bd-6dea5cca6542/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.052 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] cloning images/af540be2-bf52-4bff-b4bd-6dea5cca6542@snap to None/3d34a856-7613-4158-b859-fb3089fe3bc7_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 06 10:15:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 387 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 173 op/s
Dec 06 10:15:17 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2224318560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:17 np0005548790.localdomain ceph-mon[301742]: osdmap e113: 6 total, 6 up, 6 in
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.237 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "eb6ae4f9f77a50e8bd51ea1693a49e681e852e85" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.421 280869 DEBUG nova.network.neutron [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updated VIF entry in instance network info cache for port feb6a13d-305a-4541-a50e-4988833ecf82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.422 280869 DEBUG nova.network.neutron [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.432 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'migration_context' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:15:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4212878909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.521 280869 DEBUG oslo_concurrency.lockutils [req-fb17961e-788b-4ff9-b9dd-6a4b5f4f37b3 req-172193ad-682c-4e9e-b5bf-5ec2704e1c0b 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Releasing lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.523 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:17.545 2 INFO neutron.agent.securitygroups_rpc [None req-5e443fd1-82aa-48be-b4ff-976554ebf448 da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group rule updated ['581a4637-eff2-45f4-92f3-d575b736a840']
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.566 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.571 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.600 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] flattening vms/3d34a856-7613-4158-b859-fb3089fe3bc7_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 06 10:15:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:17.730 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:17.772 2 INFO neutron.agent.securitygroups_rpc [None req-54187745-6fe9-48d8-bbb3-7e399880134e da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group rule updated ['581a4637-eff2-45f4-92f3-d575b736a840']
Dec 06 10:15:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:15:18 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/212981679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.067 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.069 280869 DEBUG nova.virt.libvirt.vif [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-571789410',display_name='tempest-LiveMigrationTest-server-571789410',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-livemigrationtest-server-571789410',id=8,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9167331b2c424ef6961b096b551f8434',ramdisk_id='',reservation_id='r-9204byw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1593322913',owner_user_name='tempest-LiveMigrationTest-1593322913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:15:13Z,user_data=None,user_id='b25d9e5ec9eb4368a764482a325b9dda',uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.070 280869 DEBUG nova.network.os_vif_util [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Converting VIF {"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.071 280869 DEBUG nova.network.os_vif_util [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.075 280869 DEBUG nova.objects.instance [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lazy-loading 'pci_devices' on Instance uuid ed40901b-0bfc-426a-bf70-48d87ce95aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.092 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <uuid>ed40901b-0bfc-426a-bf70-48d87ce95aa6</uuid>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <name>instance-00000008</name>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <memory>131072</memory>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <vcpu>1</vcpu>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <metadata>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:name>tempest-LiveMigrationTest-server-571789410</nova:name>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:creationTime>2025-12-06 10:15:16</nova:creationTime>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:flavor name="m1.nano">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:memory>128</nova:memory>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:disk>1</nova:disk>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:swap>0</nova:swap>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </nova:flavor>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:owner>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:user uuid="b25d9e5ec9eb4368a764482a325b9dda">tempest-LiveMigrationTest-1593322913-project-member</nova:user>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:project uuid="9167331b2c424ef6961b096b551f8434">tempest-LiveMigrationTest-1593322913</nova:project>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </nova:owner>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:root type="image" uuid="6a944ab6-8965-4055-b7fc-af6e395005ea"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <nova:ports>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <nova:port uuid="feb6a13d-305a-4541-a50e-4988833ecf82">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         </nova:port>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </nova:ports>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </nova:instance>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </metadata>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <sysinfo type="smbios">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <system>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <entry name="serial">ed40901b-0bfc-426a-bf70-48d87ce95aa6</entry>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <entry name="uuid">ed40901b-0bfc-426a-bf70-48d87ce95aa6</entry>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </system>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </sysinfo>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <os>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <boot dev="hd"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <smbios mode="sysinfo"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <acpi/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <apic/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <clock offset="utc">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <timer name="hpet" present="no"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </clock>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <cpu mode="host-model" match="exact">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="disk">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <target dev="vda" bus="virtio"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="cdrom">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk.config">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <target dev="sda" bus="sata"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <interface type="ethernet">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <mac address="fa:16:3e:e5:ea:4a"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <mtu size="1442"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <target dev="tapfeb6a13d-30"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <serial type="pty">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <log file="/var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/console.log" append="off"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </serial>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <video>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <input type="tablet" bus="usb"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <rng model="virtio">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <controller type="usb" index="0"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     <memballoon model="virtio">
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:       <stats period="10"/>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:     </memballoon>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: </domain>
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.093 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Preparing to wait for external event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.093 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.094 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.094 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.096 280869 DEBUG nova.virt.libvirt.vif [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-571789410',display_name='tempest-LiveMigrationTest-server-571789410',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-livemigrationtest-server-571789410',id=8,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9167331b2c424ef6961b096b551f8434',ramdisk_id='',reservation_id='r-9204byw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1593322913',owner_user_name='tempest-LiveMigrationTest-1593322913-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:15:13Z,user_data=None,user_id='b25d9e5ec9eb4368a764482a325b9dda',uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.097 280869 DEBUG nova.network.os_vif_util [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Converting VIF {"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.099 280869 DEBUG nova.network.os_vif_util [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.100 280869 DEBUG os_vif [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.101 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.102 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.104 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.108 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.108 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfeb6a13d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.109 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfeb6a13d-30, col_values=(('external_ids', {'iface-id': 'feb6a13d-305a-4541-a50e-4988833ecf82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:ea:4a', 'vm-uuid': 'ed40901b-0bfc-426a-bf70-48d87ce95aa6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.112 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.117 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.122 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.124 280869 INFO os_vif [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30')
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.187 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.188 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.188 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] No VIF found with MAC fa:16:3e:e5:ea:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.189 280869 INFO nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Using config drive
Dec 06 10:15:18 np0005548790.localdomain ceph-mon[301742]: pgmap v129: 177 pgs: 177 active+clean; 387 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 173 op/s
Dec 06 10:15:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4212878909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/212981679' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.233 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:15:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:15:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:15:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 162210 "" "Go-http-client/1.1"
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.446 280869 INFO nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Creating config drive at /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/disk.config
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.454 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxfni6y44 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:15:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20634 "" "Go-http-client/1.1"
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.587 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxfni6y44" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.615 280869 DEBUG nova.storage.rbd_utils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] rbd image ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.620 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/disk.config ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.664 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Image rbd:vms/3d34a856-7613-4158-b859-fb3089fe3bc7_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.665 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.665 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Ensure instance console log exists: /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.666 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.666 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.666 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.668 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T10:14:55Z,direct_url=<?>,disk_format='raw',id=af540be2-bf52-4bff-b4bd-6dea5cca6542,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-65395191-shelved',owner='c6d84801a8b44d9da497e9761a0cd10c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T10:15:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'image_id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.671 280869 WARNING nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.674 280869 DEBUG nova.virt.libvirt.host [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.674 280869 DEBUG nova.virt.libvirt.host [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.675 280869 DEBUG nova.virt.libvirt.host [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.675 280869 DEBUG nova.virt.libvirt.host [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.676 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.676 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:13:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a0a7498e-22eb-495c-a2e3-89ba9e483bf6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-06T10:14:55Z,direct_url=<?>,disk_format='raw',id=af540be2-bf52-4bff-b4bd-6dea5cca6542,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-65395191-shelved',owner='c6d84801a8b44d9da497e9761a0cd10c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-06T10:15:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.676 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.676 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.677 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.677 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.677 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.677 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.677 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.678 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.678 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.678 280869 DEBUG nova.virt.hardware [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.678 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.759 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.791 280869 DEBUG oslo_concurrency.processutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/disk.config ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.792 280869 INFO nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Deleting local config drive /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6/disk.config because it was imported into RBD.
Dec 06 10:15:18 np0005548790.localdomain kernel: device tapfeb6a13d-30 entered promiscuous mode
Dec 06 10:15:18 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016118.8372] manager: (tapfeb6a13d-30): new Tun device (/org/freedesktop/NetworkManager/Devices/21)
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00070|binding|INFO|Claiming lport feb6a13d-305a-4541-a50e-4988833ecf82 for this chassis.
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00071|binding|INFO|feb6a13d-305a-4541-a50e-4988833ecf82: Claiming fa:16:3e:e5:ea:4a 10.100.0.10
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00072|binding|INFO|Claiming lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e for this chassis.
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00073|binding|INFO|99b309b3-9e3d-4a23-b110-d99707c2eb4e: Claiming fa:16:3e:11:27:4d 19.80.0.152
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.842 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 np0005548790.localdomain systemd-udevd[310512]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.852 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:27:4d 19.80.0.152'], port_security=['fa:16:3e:11:27:4d 19.80.0.152'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['feb6a13d-305a-4541-a50e-4988833ecf82'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2060007817', 'neutron:cidrs': '19.80.0.152/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2060007817', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=927c8639-172d-4240-b8a1-85db1fd6c03d, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=99b309b3-9e3d-4a23-b110-d99707c2eb4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:18 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016118.8566] device (tapfeb6a13d-30): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 10:15:18 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016118.8572] device (tapfeb6a13d-30): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.854 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:ea:4a 10.100.0.10'], port_security=['fa:16:3e:e5:ea:4a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1146072664', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ed40901b-0bfc-426a-bf70-48d87ce95aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45604602-bc87-4608-9881-9568cbf90870', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1146072664', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40d335f-7e85-43c3-894d-993c12735497, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=feb6a13d-305a-4541-a50e-4988833ecf82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00074|binding|INFO|Setting lport feb6a13d-305a-4541-a50e-4988833ecf82 ovn-installed in OVS
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00075|binding|INFO|Setting lport feb6a13d-305a-4541-a50e-4988833ecf82 up in Southbound
Dec 06 10:15:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:18Z|00076|binding|INFO|Setting lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e up in Southbound
Dec 06 10:15:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:18.859 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.860 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 99b309b3-9e3d-4a23-b110-d99707c2eb4e in datapath 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 bound to our chassis
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.865 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 03ab6bb0-f7de-47ce-96d3-8bc70e314b38 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.865 159200 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93
Dec 06 10:15:18 np0005548790.localdomain systemd-machined[202564]: New machine qemu-2-instance-00000008.
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.876 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3f610d04-dff1-4064-a6b0-1ca9b3401738]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.877 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap19043ea6-c1 in ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.878 262518 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap19043ea6-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.879 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b86047cd-ba45-4246-9d72-7cb927ea6fc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.880 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[782297f7-230d-436c-ae57-71b767758049]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain systemd[1]: Started Virtual Machine qemu-2-instance-00000008.
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.890 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd01de8-a85b-467a-99f3-3fb6cfe4f8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.900 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b08f2859-e85d-4772-a685-3b57310140c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.923 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[1ceb9b72-dce7-4cf9-bf9e-d73877059678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain systemd-udevd[310517]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:15:18 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016118.9446] manager: (tap19043ea6-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.943 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[4da5a97a-0e87-4b82-ac64-1fa0abddaf2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.965 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[7e027f87-c0c3-4788-9511-61003340a09a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.968 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[4edaed1b-6d16-485e-8c7c-bafd97ea09c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:18 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap19043ea6-c1: link becomes ready
Dec 06 10:15:18 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap19043ea6-c0: link becomes ready
Dec 06 10:15:18 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016118.9851] device (tap19043ea6-c0): carrier: link connected
Dec 06 10:15:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:18.989 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[b5911931-7ee7-47a9-8bd5-24bc4bcaf695]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.004 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3576a84e-e222-4451-8424-b276abf16bb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19043ea6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:00:81:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1253412, 'reachable_time': 39502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310583, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.015 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5066c4-be4c-46a5-929d-baea55a172e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:8115'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1253412, 'tstamp': 1253412}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310586, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.029 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[9262fbf6-3aec-4fd1-b2bd-3a5c002db80a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap19043ea6-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:00:81:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1253412, 'reachable_time': 39502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310594, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.045 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8355070b-6b0a-4ff1-8f5e-0bdd90882f6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 352 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 9.0 MiB/s rd, 8.5 MiB/s wr, 280 op/s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.076 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c71e361b-0292-4c04-ba94-724ca6b34360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.078 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19043ea6-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.078 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.079 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19043ea6-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.112 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain kernel: device tap19043ea6-c0 entered promiscuous mode
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.114 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.115 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap19043ea6-c0, col_values=(('external_ids', {'iface-id': 'b960e3cf-838e-4b32-93f1-7da76cedadcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.117 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:19Z|00077|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0)
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.117 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.118 159200 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.118 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6c917a-fdac-4c93-9bbd-5afbeaa8de3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.119 159200 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: global
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     log         /dev/log local0 debug
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     log-tag     haproxy-metadata-proxy-19043ea6-c6b2-4272-aa60-1b11a7b5bd93
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     user        root
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     group       root
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     maxconn     1024
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     pidfile     /var/lib/neutron/external/pids/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.pid.haproxy
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     daemon
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: defaults
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     log global
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     mode http
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option httplog
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option dontlognull
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option http-server-close
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option forwardfor
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     retries                 3
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-request    30s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout connect         30s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout client          32s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout server          32s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-keep-alive 30s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: listen listener
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     bind 169.254.169.254:80
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     http-request add-header X-OVN-Network-ID 19043ea6-c6b2-4272-aa60-1b11a7b5bd93
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.120 159200 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'env', 'PROCESS_TAG=haproxy-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/19043ea6-c6b2-4272-aa60-1b11a7b5bd93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.122 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.170 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016119.170042, ed40901b-0bfc-426a-bf70-48d87ce95aa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.171 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Started (Lifecycle Event)
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.190 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.193 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016119.1705306, ed40901b-0bfc-426a-bf70-48d87ce95aa6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.193 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Paused (Lifecycle Event)
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.208 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.211 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.231 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.257 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] rbd image 3d34a856-7613-4158-b859-fb3089fe3bc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:19 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/412049606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.260 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.272 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:15:19 np0005548790.localdomain podman[310681]: 
Dec 06 10:15:19 np0005548790.localdomain podman[310681]: 2025-12-06 10:15:19.503295024 +0000 UTC m=+0.095179126 container create bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:19 np0005548790.localdomain systemd[1]: Started libpod-conmon-bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4.scope.
Dec 06 10:15:19 np0005548790.localdomain podman[310681]: 2025-12-06 10:15:19.461660707 +0000 UTC m=+0.053544869 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 10:15:19 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:15:19 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f377242eabb2a26dd4beb6b01f7cad3751ae6c88498a41b76979474b10de6b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:19 np0005548790.localdomain podman[310681]: 2025-12-06 10:15:19.587825829 +0000 UTC m=+0.179709971 container init bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:15:19 np0005548790.localdomain podman[310681]: 2025-12-06 10:15:19.595562509 +0000 UTC m=+0.187446621 container start bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:15:19 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [NOTICE]   (310700) : New worker (310702) forked
Dec 06 10:15:19 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [NOTICE]   (310700) : Loading success.
Dec 06 10:15:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:15:19 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/46663597' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.653 159200 INFO neutron.agent.ovn.metadata.agent [-] Port feb6a13d-305a-4541-a50e-4988833ecf82 in datapath 45604602-bc87-4608-9881-9568cbf90870 unbound from our chassis
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.656 159200 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45604602-bc87-4608-9881-9568cbf90870
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.667 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0846b0e0-cb14-4a0f-bfa6-d1fa27a01b71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.667 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45604602-b1 in ovnmeta-45604602-bc87-4608-9881-9568cbf90870 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.668 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.670 262518 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45604602-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.670 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[482f1069-efb0-496a-9e1e-bd8ac0c9921a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.670 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.671 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[cee679ea-ac3b-4d96-95d7-3222a322c907]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.679 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa85965-7d1e-4aa1-b2d3-28dd5980877f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.686 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <uuid>3d34a856-7613-4158-b859-fb3089fe3bc7</uuid>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <name>instance-00000006</name>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <memory>131072</memory>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <vcpu>1</vcpu>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <metadata>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-65395191</nova:name>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:creationTime>2025-12-06 10:15:18</nova:creationTime>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:flavor name="m1.nano">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:memory>128</nova:memory>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:disk>1</nova:disk>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:swap>0</nova:swap>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       </nova:flavor>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:owner>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:user uuid="496ca8bf29dc4e81ba0b08a592dc45d3">tempest-UnshelveToHostMultiNodesTest-912460009-project-member</nova:user>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <nova:project uuid="c6d84801a8b44d9da497e9761a0cd10c">tempest-UnshelveToHostMultiNodesTest-912460009</nova:project>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       </nova:owner>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:root type="image" uuid="af540be2-bf52-4bff-b4bd-6dea5cca6542"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <nova:ports/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </nova:instance>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </metadata>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <sysinfo type="smbios">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <system>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <entry name="serial">3d34a856-7613-4158-b859-fb3089fe3bc7</entry>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <entry name="uuid">3d34a856-7613-4158-b859-fb3089fe3bc7</entry>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </system>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </sysinfo>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <os>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <boot dev="hd"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <smbios mode="sysinfo"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <acpi/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <apic/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <clock offset="utc">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <timer name="hpet" present="no"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </clock>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <cpu mode="host-model" match="exact">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="disk">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/3d34a856-7613-4158-b859-fb3089fe3bc7_disk">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <target dev="vda" bus="virtio"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="cdrom">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/3d34a856-7613-4158-b859-fb3089fe3bc7_disk.config">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <target dev="sda" bus="sata"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <serial type="pty">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <log file="/var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/console.log" append="off"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </serial>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <video>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <input type="tablet" bus="usb"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <input type="keyboard" bus="usb"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <rng model="virtio">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <controller type="usb" index="0"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     <memballoon model="virtio">
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:       <stats period="10"/>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:     </memballoon>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: </domain>
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.688 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0df3434e-7569-4faa-84a4-4868b399189e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.716 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[03104970-fadc-45d9-b49f-1e5c53bc5c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain systemd-udevd[310564]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.721 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b212b479-be3f-4141-a8ab-98c13010f371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016119.7228] manager: (tap45604602-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.739 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.740 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.741 280869 INFO nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Using config drive
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.748 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[3f93a450-7e75-4e5b-8960-cb0bf3982970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.754 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[4495ad75-65b9-49ef-bd67-9aace7553b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap45604602-b0: link becomes ready
Dec 06 10:15:19 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016119.7748] device (tap45604602-b0): carrier: link connected
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.779 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[29b7b36a-3c09-4146-974c-937ebe6692fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.789 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] rbd image 3d34a856-7613-4158-b859-fb3089fe3bc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.791 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d3693f47-d31d-4366-b7d1-190a7d23d05b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45604602-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:48:e6:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1253491, 'reachable_time': 32763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310740, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.804 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e81869f1-650c-4ef4-ac3d-fb6ae6d95700]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:e68f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1253491, 'tstamp': 1253491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310743, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.808 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.821 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[7992d346-3112-45af-bac2-f1a19f3785c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45604602-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:48:e6:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1253491, 'reachable_time': 32763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310744, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.844 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[f559e717-fa41-4dfb-b0cb-f6cb91a23069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.847 280869 DEBUG nova.objects.instance [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lazy-loading 'keypairs' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.885 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f6df85-97c4-4655-b79f-c57950919828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.886 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45604602-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.887 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.887 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45604602-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:19 np0005548790.localdomain kernel: device tap45604602-b0 entered promiscuous mode
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.889 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.893 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45604602-b0, col_values=(('external_ids', {'iface-id': 'd57132cf-ea52-419a-82d6-37dcdb5dd89a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.895 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:19 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:19Z|00078|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0)
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.897 159200 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45604602-bc87-4608-9881-9568cbf90870.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45604602-bc87-4608-9881-9568cbf90870.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.898 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[842ca70a-0b04-4673-8809-01883f6944f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.899 159200 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: global
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     log         /dev/log local0 debug
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     log-tag     haproxy-metadata-proxy-45604602-bc87-4608-9881-9568cbf90870
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     user        root
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     group       root
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     maxconn     1024
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     pidfile     /var/lib/neutron/external/pids/45604602-bc87-4608-9881-9568cbf90870.pid.haproxy
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     daemon
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: defaults
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     log global
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     mode http
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option httplog
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option dontlognull
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option http-server-close
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     option forwardfor
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     retries                 3
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-request    30s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout connect         30s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout client          32s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout server          32s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-keep-alive 30s
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: listen listener
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     bind 169.254.169.254:80
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:     http-request add-header X-OVN-Network-ID 45604602-bc87-4608-9881-9568cbf90870
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:15:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:19.900 159200 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'env', 'PROCESS_TAG=haproxy-45604602-bc87-4608-9881-9568cbf90870', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45604602-bc87-4608-9881-9568cbf90870.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.901 280869 INFO nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Creating config drive at /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/disk.config
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.906 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1veqlscu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:19.918 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.023 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1veqlscu" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.059 280869 DEBUG nova.storage.rbd_utils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] rbd image 3d34a856-7613-4158-b859-fb3089fe3bc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.064 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/disk.config 3d34a856-7613-4158-b859-fb3089fe3bc7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.264 280869 DEBUG oslo_concurrency.processutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/disk.config 3d34a856-7613-4158-b859-fb3089fe3bc7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.264 280869 INFO nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Deleting local config drive /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7/disk.config because it was imported into RBD.
Dec 06 10:15:20 np0005548790.localdomain ceph-mon[301742]: pgmap v130: 177 pgs: 177 active+clean; 352 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 9.0 MiB/s rd, 8.5 MiB/s wr, 280 op/s
Dec 06 10:15:20 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/46663597' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:20 np0005548790.localdomain systemd-machined[202564]: New machine qemu-3-instance-00000006.
Dec 06 10:15:20 np0005548790.localdomain systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Dec 06 10:15:20 np0005548790.localdomain podman[310816]: 
Dec 06 10:15:20 np0005548790.localdomain podman[310816]: 2025-12-06 10:15:20.339038227 +0000 UTC m=+0.080345184 container create b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:15:20 np0005548790.localdomain systemd[1]: Started libpod-conmon-b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9.scope.
Dec 06 10:15:20 np0005548790.localdomain systemd[1]: tmp-crun.07h5Q9.mount: Deactivated successfully.
Dec 06 10:15:20 np0005548790.localdomain podman[310816]: 2025-12-06 10:15:20.29923101 +0000 UTC m=+0.040538017 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 10:15:20 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:15:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9474eed38b2a4893a7c08b7b4e3ceb711cea97001938ba768121db76a59f4d1e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:15:20 np0005548790.localdomain podman[310816]: 2025-12-06 10:15:20.41050331 +0000 UTC m=+0.151810257 container init b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 10:15:20 np0005548790.localdomain podman[310816]: 2025-12-06 10:15:20.421419855 +0000 UTC m=+0.162726802 container start b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 10:15:20 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [NOTICE]   (310864) : New worker (310869) forked
Dec 06 10:15:20 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [NOTICE]   (310864) : Loading success.
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.561 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016120.5613291, 3d34a856-7613-4158-b859-fb3089fe3bc7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.562 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] VM Resumed (Lifecycle Event)
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.564 280869 DEBUG nova.compute.manager [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.564 280869 DEBUG nova.virt.libvirt.driver [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.566 280869 INFO nova.virt.libvirt.driver [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Instance spawned successfully.
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.585 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.588 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.614 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.614 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016120.5636466, 3d34a856-7613-4158-b859-fb3089fe3bc7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.614 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] VM Started (Lifecycle Event)
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.638 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.641 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:20 np0005548790.localdomain sshd[310902]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:15:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:20.666 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:15:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 177 active+clean; 352 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 7.6 MiB/s wr, 250 op/s
Dec 06 10:15:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e114 e114: 6 total, 6 up, 6 in
Dec 06 10:15:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:21.803 280869 DEBUG nova.compute.manager [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:21.870 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:21.874 280869 DEBUG oslo_concurrency.lockutils [None req-63c839a9-3490-4f1d-8f8e-9bd145d3fe53 83cd451c5dc442c78643ba7608b7134a 48c16b78edd4478f9497cfefc6873978 - - default default] Lock "3d34a856-7613-4158-b859-fb3089fe3bc7" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 6.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:22Z|00079|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0)
Dec 06 10:15:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:22Z|00080|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0)
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.211 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:22 np0005548790.localdomain ceph-mon[301742]: pgmap v131: 177 pgs: 177 active+clean; 352 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 7.6 MiB/s wr, 250 op/s
Dec 06 10:15:22 np0005548790.localdomain ceph-mon[301742]: osdmap e114: 6 total, 6 up, 6 in
Dec 06 10:15:22 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1244880858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.527 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Acquiring lock "3d34a856-7613-4158-b859-fb3089fe3bc7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.528 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lock "3d34a856-7613-4158-b859-fb3089fe3bc7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.528 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Acquiring lock "3d34a856-7613-4158-b859-fb3089fe3bc7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.529 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lock "3d34a856-7613-4158-b859-fb3089fe3bc7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.529 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lock "3d34a856-7613-4158-b859-fb3089fe3bc7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.531 280869 INFO nova.compute.manager [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Terminating instance
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.532 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Acquiring lock "refresh_cache-3d34a856-7613-4158-b859-fb3089fe3bc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.533 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Acquired lock "refresh_cache-3d34a856-7613-4158-b859-fb3089fe3bc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.533 280869 DEBUG nova.network.neutron [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.601 280869 DEBUG nova.network.neutron [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:15:22 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:22.678 2 INFO neutron.agent.securitygroups_rpc [req-32f9c27c-7e39-487b-9f96-37ea07c2a545 req-64092713-96b8-4823-87de-00cf06a3e614 da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group member updated ['581a4637-eff2-45f4-92f3-d575b736a840']
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.767 280869 DEBUG nova.network.neutron [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.790 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Releasing lock "refresh_cache-3d34a856-7613-4158-b859-fb3089fe3bc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:22.792 280869 DEBUG nova.compute.manager [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:15:22 np0005548790.localdomain systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 06 10:15:22 np0005548790.localdomain systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 2.528s CPU time.
Dec 06 10:15:22 np0005548790.localdomain systemd-machined[202564]: Machine qemu-3-instance-00000006 terminated.
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.013 280869 INFO nova.virt.libvirt.driver [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Instance destroyed successfully.
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.014 280869 DEBUG nova.objects.instance [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lazy-loading 'resources' on Instance uuid 3d34a856-7613-4158-b859-fb3089fe3bc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 399 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 11 MiB/s wr, 345 op/s
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.112 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:15:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.702 280869 INFO nova.virt.libvirt.driver [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Deleting instance files /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7_del
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.703 280869 INFO nova.virt.libvirt.driver [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Deletion of /var/lib/nova/instances/3d34a856-7613-4158-b859-fb3089fe3bc7_del complete
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.744 280869 DEBUG nova.virt.libvirt.host [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.744 280869 INFO nova.virt.libvirt.host [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] UEFI support detected
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.746 280869 INFO nova.compute.manager [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Took 0.95 seconds to destroy the instance on the hypervisor.
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.746 280869 DEBUG oslo.service.loopingcall [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.746 280869 DEBUG nova.compute.manager [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.746 280869 DEBUG nova.network.neutron [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.772 280869 DEBUG nova.network.neutron [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.787 280869 DEBUG nova.network.neutron [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.803 280869 INFO nova.compute.manager [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Took 0.06 seconds to deallocate network for instance.
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.866 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.867 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:23.952 280869 DEBUG oslo_concurrency.processutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:24 np0005548790.localdomain ceph-mon[301742]: pgmap v133: 177 pgs: 177 active+clean; 399 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 11 MiB/s wr, 345 op/s
Dec 06 10:15:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1760790147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:24.357 280869 DEBUG oslo_concurrency.processutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:24.363 280869 DEBUG nova.compute.provider_tree [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:24.392 280869 DEBUG nova.scheduler.client.report [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:24.422 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:24.457 280869 INFO nova.scheduler.client.report [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Deleted allocations for instance 3d34a856-7613-4158-b859-fb3089fe3bc7
Dec 06 10:15:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:24.551 280869 DEBUG oslo_concurrency.lockutils [None req-fb29ae78-228a-42e6-a787-8121843c1b9d 496ca8bf29dc4e81ba0b08a592dc45d3 c6d84801a8b44d9da497e9761a0cd10c - - default default] Lock "3d34a856-7613-4158-b859-fb3089fe3bc7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 399 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 11 MiB/s wr, 343 op/s
Dec 06 10:15:25 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1760790147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:25 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2447503790' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.113 280869 DEBUG nova.compute.manager [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.114 280869 DEBUG oslo_concurrency.lockutils [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.114 280869 DEBUG oslo_concurrency.lockutils [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.114 280869 DEBUG oslo_concurrency.lockutils [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.115 280869 DEBUG nova.compute.manager [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Processing event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.115 280869 DEBUG nova.compute.manager [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.115 280869 DEBUG oslo_concurrency.lockutils [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.116 280869 DEBUG oslo_concurrency.lockutils [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.116 280869 DEBUG oslo_concurrency.lockutils [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.116 280869 DEBUG nova.compute.manager [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.116 280869 WARNING nova.compute.manager [req-f57c28bf-8dc2-4349-9fbd-875640eca46c req-7520693a-f86d-4624-a38a-168fa2ad7f3c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received unexpected event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with vm_state building and task_state spawning.
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.117 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.122 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016126.1221528, ed40901b-0bfc-426a-bf70-48d87ce95aa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.122 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Resumed (Lifecycle Event)
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.125 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.132 280869 INFO nova.virt.libvirt.driver [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Instance spawned successfully.
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.133 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.145 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.153 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.160 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.160 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.161 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.162 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.163 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.164 280869 DEBUG nova.virt.libvirt.driver [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.195 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.242 280869 INFO nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Took 12.37 seconds to spawn the instance on the hypervisor.
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.243 280869 DEBUG nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:26 np0005548790.localdomain ceph-mon[301742]: pgmap v134: 177 pgs: 177 active+clean; 399 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.4 MiB/s rd, 11 MiB/s wr, 343 op/s
Dec 06 10:15:26 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2463120775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.432 280869 INFO nova.compute.manager [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Took 13.46 seconds to build instance.
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.446 280869 DEBUG oslo_concurrency.lockutils [None req-4fba5bfd-6d88-454e-bdae-88583f78a5ba b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:26.873 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:26.981 2 INFO neutron.agent.securitygroups_rpc [None req-4bf7090f-619c-441c-8a74-44ff051b2a47 ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']
Dec 06 10:15:26 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 e115: 6 total, 6 up, 6 in
Dec 06 10:15:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 399 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 4.3 MiB/s rd, 8.5 MiB/s wr, 195 op/s
Dec 06 10:15:27 np0005548790.localdomain dnsmasq[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/addn_hosts - 0 addresses
Dec 06 10:15:27 np0005548790.localdomain dnsmasq-dhcp[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/host
Dec 06 10:15:27 np0005548790.localdomain dnsmasq-dhcp[308410]: read /var/lib/neutron/dhcp/932e7489-8895-41d4-92c6-0d944505e7e6/opts
Dec 06 10:15:27 np0005548790.localdomain podman[310964]: 2025-12-06 10:15:27.26755795 +0000 UTC m=+0.084041014 container kill 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:15:27 np0005548790.localdomain dnsmasq[308410]: exiting on receipt of SIGTERM
Dec 06 10:15:27 np0005548790.localdomain podman[311004]: 2025-12-06 10:15:27.738631041 +0000 UTC m=+0.046323925 container kill 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:15:27 np0005548790.localdomain systemd[1]: libpod-83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2.scope: Deactivated successfully.
Dec 06 10:15:27 np0005548790.localdomain podman[311019]: 2025-12-06 10:15:27.798558561 +0000 UTC m=+0.044134394 container died 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:15:27 np0005548790.localdomain podman[311019]: 2025-12-06 10:15:27.836300032 +0000 UTC m=+0.081875875 container cleanup 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 10:15:27 np0005548790.localdomain systemd[1]: libpod-conmon-83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2.scope: Deactivated successfully.
Dec 06 10:15:27 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:27Z|00081|binding|INFO|Removing iface tap90d179d9-70 ovn-installed in OVS
Dec 06 10:15:27 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:27Z|00082|binding|INFO|Removing lport 90d179d9-70ee-4fd4-b3f7-244a3cbb2cac ovn-installed in OVS
Dec 06 10:15:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:27.890 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 26f7da5c-ac29-4b53-b23e-4d7e1fbc3e37 with type ""
Dec 06 10:15:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:27.892 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-932e7489-8895-41d4-92c6-0d944505e7e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7897d6398eb64eb29c66df8db792e581', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9bb405c-aea0-4a81-a300-475f8e1e8050, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=90d179d9-70ee-4fd4-b3f7-244a3cbb2cac) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:27.894 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 90d179d9-70ee-4fd4-b3f7-244a3cbb2cac in datapath 932e7489-8895-41d4-92c6-0d944505e7e6 unbound from our chassis
Dec 06 10:15:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:27.898 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 932e7489-8895-41d4-92c6-0d944505e7e6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:15:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:27.920 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[17e08601-4f05-4c91-9baf-0b69c77f52eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:27.921 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:27.922 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:27 np0005548790.localdomain podman[311024]: 2025-12-06 10:15:27.94087121 +0000 UTC m=+0.174952282 container remove 83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-932e7489-8895-41d4-92c6-0d944505e7e6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 10:15:27 np0005548790.localdomain kernel: device tap90d179d9-70 left promiscuous mode
Dec 06 10:15:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:27.957 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:27.973 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:27 np0005548790.localdomain ceph-mon[301742]: osdmap e115: 6 total, 6 up, 6 in
Dec 06 10:15:27 np0005548790.localdomain ceph-mon[301742]: pgmap v136: 177 pgs: 177 active+clean; 399 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 4.3 MiB/s rd, 8.5 MiB/s wr, 195 op/s
Dec 06 10:15:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:27.998 262327 INFO neutron.agent.dhcp.agent [None req-1a38a049-51a4-45a1-b4f7-bf0fd99820f7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:15:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:28.113 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:28 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:28.192 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:15:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-148f1b7c6389b4a4b7cc0c9d16e7695e44bf29396406f91e0b7e03b1be1b2340-merged.mount: Deactivated successfully.
Dec 06 10:15:28 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83809e57dadc6f0e97f71242f3d8afe88fd8bab9d6a9d45c0ecb989d8e337ff2-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:28 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d932e7489\x2d8895\x2d41d4\x2d92c6\x2d0d944505e7e6.mount: Deactivated successfully.
Dec 06 10:15:28 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:28Z|00083|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0)
Dec 06 10:15:28 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:28Z|00084|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0)
Dec 06 10:15:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:28.449 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:28 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:28.683 2 INFO neutron.agent.securitygroups_rpc [None req-9fa949f8-0732-40f0-9fd9-bacbdfb578db ac2e85103fd14829ad4e6df2357da95b 7897d6398eb64eb29c66df8db792e581 - - default default] Security group member updated ['bfad329a-0ea3-4b02-8e91-9d15749f8c9b']
Dec 06 10:15:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:28.901 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:28 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:28.903 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:28 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:28.905 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:15:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v137: 177 pgs: 177 active+clean; 238 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 336 op/s
Dec 06 10:15:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:29.434 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Check if temp file /var/lib/nova/instances/tmpm9_iowog exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 06 10:15:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:29.435 280869 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ed40901b-0bfc-426a-bf70-48d87ce95aa6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 06 10:15:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:29.612 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:30 np0005548790.localdomain ceph-mon[301742]: pgmap v137: 177 pgs: 177 active+clean; 238 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 8.5 MiB/s wr, 336 op/s
Dec 06 10:15:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:15:30 np0005548790.localdomain systemd[1]: tmp-crun.8cHgJc.mount: Deactivated successfully.
Dec 06 10:15:30 np0005548790.localdomain podman[311049]: 2025-12-06 10:15:30.598549012 +0000 UTC m=+0.103739467 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 10:15:30 np0005548790.localdomain podman[311049]: 2025-12-06 10:15:30.630391053 +0000 UTC m=+0.135581538 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 06 10:15:30 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:15:30 np0005548790.localdomain sshd[310902]: Connection closed by 3.137.73.221 port 37288 [preauth]
Dec 06 10:15:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 238 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 7.0 MiB/s wr, 275 op/s
Dec 06 10:15:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:31.874 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:32 np0005548790.localdomain ceph-mon[301742]: pgmap v138: 177 pgs: 177 active+clean; 238 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 7.0 MiB/s wr, 275 op/s
Dec 06 10:15:32 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/936145217' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:32.620 280869 DEBUG nova.compute.manager [req-bf19c6dd-53fd-4b1d-941a-80fc00f840d4 req-6005763a-df87-40f5-bfb7-4911f8a1241c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-unplugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:32.620 280869 DEBUG oslo_concurrency.lockutils [req-bf19c6dd-53fd-4b1d-941a-80fc00f840d4 req-6005763a-df87-40f5-bfb7-4911f8a1241c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:32.621 280869 DEBUG oslo_concurrency.lockutils [req-bf19c6dd-53fd-4b1d-941a-80fc00f840d4 req-6005763a-df87-40f5-bfb7-4911f8a1241c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:32.621 280869 DEBUG oslo_concurrency.lockutils [req-bf19c6dd-53fd-4b1d-941a-80fc00f840d4 req-6005763a-df87-40f5-bfb7-4911f8a1241c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:32.621 280869 DEBUG nova.compute.manager [req-bf19c6dd-53fd-4b1d-941a-80fc00f840d4 req-6005763a-df87-40f5-bfb7-4911f8a1241c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-unplugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:32.621 280869 DEBUG nova.compute.manager [req-bf19c6dd-53fd-4b1d-941a-80fc00f840d4 req-6005763a-df87-40f5-bfb7-4911f8a1241c 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-unplugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:15:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 20 KiB/s wr, 234 op/s
Dec 06 10:15:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:33.115 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:33 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/843309173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:34 np0005548790.localdomain ceph-mon[301742]: pgmap v139: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 20 KiB/s wr, 234 op/s
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.336 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.363 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.364 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquired lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.365 280869 DEBUG nova.network.neutron [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.365 280869 DEBUG nova.objects.instance [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ed40901b-0bfc-426a-bf70-48d87ce95aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.538 280869 INFO nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Took 4.30 seconds for pre_live_migration on destination host np0005548789.localdomain.
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.539 280869 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.557 280869 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpm9_iowog',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='ed40901b-0bfc-426a-bf70-48d87ce95aa6',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(0c4fb838-191a-43fb-92ad-31ab3b6d11ce),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.560 280869 DEBUG nova.objects.instance [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lazy-loading 'migration_context' on Instance uuid ed40901b-0bfc-426a-bf70-48d87ce95aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.562 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.564 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.564 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.584 280869 DEBUG nova.virt.libvirt.vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-571789410',display_name='tempest-LiveMigrationTest-server-571789410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-livemigrationtest-server-571789410',id=8,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:15:26Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9167331b2c424ef6961b096b551f8434',ramdisk_id='',reservation_id='r-9204byw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1593322913',owner_user_name='tempest-LiveMigrationTest-1593322913-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:15:26Z,user_data=None,user_id='b25d9e5ec9eb4368a764482a325b9dda',uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.584 280869 DEBUG nova.network.os_vif_util [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Converting VIF {"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.585 280869 DEBUG nova.network.os_vif_util [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.586 280869 DEBUG nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating guest XML with vif config: <interface type="ethernet">
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]:   <mac address="fa:16:3e:e5:ea:4a"/>
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]:   <model type="virtio"/>
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]:   <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]:   <mtu size="1442"/>
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]:   <target dev="tapfeb6a13d-30"/>
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: </interface>
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.587 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.668 280869 DEBUG nova.compute.manager [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.668 280869 DEBUG oslo_concurrency.lockutils [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.669 280869 DEBUG oslo_concurrency.lockutils [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.669 280869 DEBUG oslo_concurrency.lockutils [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.669 280869 DEBUG nova.compute.manager [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.669 280869 WARNING nova.compute.manager [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received unexpected event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with vm_state active and task_state migrating.
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.670 280869 DEBUG nova.compute.manager [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-changed-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.670 280869 DEBUG nova.compute.manager [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Refreshing instance network info cache due to event network-changed-feb6a13d-305a-4541-a50e-4988833ecf82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:15:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:34.670 280869 DEBUG oslo_concurrency.lockutils [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:15:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 20 KiB/s wr, 234 op/s
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.066 280869 DEBUG nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.068 280869 INFO nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.171 280869 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.336 280869 DEBUG nova.network.neutron [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005548789.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.370 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Releasing lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.370 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.371 280869 DEBUG oslo_concurrency.lockutils [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquired lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.371 280869 DEBUG nova.network.neutron [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Refreshing network info cache for port feb6a13d-305a-4541-a50e-4988833ecf82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:15:35 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:35Z|00085|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0)
Dec 06 10:15:35 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:35Z|00086|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0)
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.528 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.675 280869 DEBUG nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 10:15:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:35.676 280869 DEBUG nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.086 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016136.0864658, ed40901b-0bfc-426a-bf70-48d87ce95aa6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.087 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Paused (Lifecycle Event)
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.112 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.116 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.142 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.180 280869 DEBUG nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.181 280869 DEBUG nova.virt.libvirt.migration [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 06 10:15:36 np0005548790.localdomain ceph-mon[301742]: pgmap v140: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 20 KiB/s wr, 234 op/s
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:15:36 np0005548790.localdomain kernel: device tapfeb6a13d-30 left promiscuous mode
Dec 06 10:15:36 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016136.2466] device (tapfeb6a13d-30): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.265 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00087|binding|INFO|Releasing lport feb6a13d-305a-4541-a50e-4988833ecf82 from this chassis (sb_readonly=0)
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00088|binding|INFO|Setting lport feb6a13d-305a-4541-a50e-4988833ecf82 down in Southbound
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00089|binding|INFO|Releasing lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e from this chassis (sb_readonly=0)
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00090|binding|INFO|Setting lport 99b309b3-9e3d-4a23-b110-d99707c2eb4e down in Southbound
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00091|binding|INFO|Removing iface tapfeb6a13d-30 ovn-installed in OVS
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.270 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00092|binding|INFO|Releasing lport b960e3cf-838e-4b32-93f1-7da76cedadcc from this chassis (sb_readonly=0)
Dec 06 10:15:36 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:36Z|00093|binding|INFO|Releasing lport d57132cf-ea52-419a-82d6-37dcdb5dd89a from this chassis (sb_readonly=0)
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.276 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:27:4d 19.80.0.152'], port_security=['fa:16:3e:11:27:4d 19.80.0.152'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['feb6a13d-305a-4541-a50e-4988833ecf82'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2060007817', 'neutron:cidrs': '19.80.0.152/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2060007817', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '3', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=927c8639-172d-4240-b8a1-85db1fd6c03d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=99b309b3-9e3d-4a23-b110-d99707c2eb4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.279 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:ea:4a 10.100.0.10'], port_security=['fa:16:3e:e5:ea:4a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain,np0005548789.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'b142a5ef-fbed-4e92-aa78-e3ad080c6370'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1146072664', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ed40901b-0bfc-426a-bf70-48d87ce95aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45604602-bc87-4608-9881-9568cbf90870', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1146072664', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4c82b56e-0fc5-4c7f-8922-ceb8236815fd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d40d335f-7e85-43c3-894d-993c12735497, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=feb6a13d-305a-4541-a50e-4988833ecf82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.282 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 99b309b3-9e3d-4a23-b110-d99707c2eb4e in datapath 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 unbound from our chassis
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.285 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 03ab6bb0-f7de-47ce-96d3-8bc70e314b38 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.286 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 10.567s CPU time.
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.287 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[73721b3d-edff-43e4-bd42-e97b4fb5bcd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.289 159200 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 namespace which is not needed anymore
Dec 06 10:15:36 np0005548790.localdomain systemd-machined[202564]: Machine qemu-2-instance-00000008 terminated.
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.304 280869 DEBUG nova.network.neutron [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updated VIF entry in instance network info cache for port feb6a13d-305a-4541-a50e-4988833ecf82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.304 280869 DEBUG nova.network.neutron [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating instance_info_cache with network_info: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005548789.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.308 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.317 280869 DEBUG oslo_concurrency.lockutils [req-05bfdd54-c7df-46b9-b021-54f4e17a02fa req-061cdd20-3c9f-4420-809c-c919378b21b6 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Releasing lock "refresh_cache-ed40901b-0bfc-426a-bf70-48d87ce95aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.324 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: tmp-crun.ZSqe7q.mount: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain podman[311072]: 2025-12-06 10:15:36.336945055 +0000 UTC m=+0.108223457 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:15:36 np0005548790.localdomain podman[311072]: 2025-12-06 10:15:36.399214379 +0000 UTC m=+0.170492751 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:15:36 np0005548790.localdomain virtqemud[228868]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk: No such file or directory
Dec 06 10:15:36 np0005548790.localdomain virtqemud[228868]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/ed40901b-0bfc-426a-bf70-48d87ce95aa6_disk: No such file or directory
Dec 06 10:15:36 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016136.4076] manager: (tapfeb6a13d-30): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain podman[311071]: 2025-12-06 10:15:36.371956823 +0000 UTC m=+0.141766076 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.433 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.434 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.434 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [NOTICE]   (310700) : haproxy version is 2.8.14-c23fe91
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [NOTICE]   (310700) : path to executable is /usr/sbin/haproxy
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [WARNING]  (310700) : Exiting Master process...
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [ALERT]    (310700) : Current worker (310702) exited with code 143 (Terminated)
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93[310696]: [WARNING]  (310700) : All workers exited. Exiting... (0)
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: libpod-bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4.scope: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain podman[311152]: 2025-12-06 10:15:36.451490064 +0000 UTC m=+0.060673453 container died bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:36 np0005548790.localdomain podman[311152]: 2025-12-06 10:15:36.47944667 +0000 UTC m=+0.088630049 container cleanup bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:15:36 np0005548790.localdomain podman[311073]: 2025-12-06 10:15:36.352482366 +0000 UTC m=+0.118345242 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, managed_by=edpm_ansible)
Dec 06 10:15:36 np0005548790.localdomain podman[311190]: 2025-12-06 10:15:36.54931684 +0000 UTC m=+0.054797784 container remove bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.553 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2b54f5-65d2-4022-a811-461697beb7b1]: (4, ('Sat Dec  6 10:15:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 (bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4)\nbb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4\nSat Dec  6 10:15:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 (bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4)\nbb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain podman[311071]: 2025-12-06 10:15:36.554834159 +0000 UTC m=+0.324643392 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.555 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[dc210268-f8da-42ce-b490-9d0a0bbe8bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.556 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19043ea6-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:36 np0005548790.localdomain podman[311073]: 2025-12-06 10:15:36.584185442 +0000 UTC m=+0.350048318 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1755695350, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.592 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain kernel: device tap19043ea6-c0 left promiscuous mode
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.608 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: libpod-conmon-bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4.scope: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.611 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c6c2ea-b0a9-4667-9695-5b004fc95c43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.628 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[66813066-a941-4065-bbec-e8f5a36b9ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.629 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea1273e-c186-4d62-878d-b4f34a8d007d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.639 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[ee436d76-b896-424f-9627-a83554fd9e1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1253405, 'reachable_time': 17423, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311212, 'error': None, 'target': 'ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.641 159379 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-19043ea6-c6b2-4272-aa60-1b11a7b5bd93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.641 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[09d37a35-86fa-42c3-aad7-8f63435189dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.643 159200 INFO neutron.agent.ovn.metadata.agent [-] Port feb6a13d-305a-4541-a50e-4988833ecf82 in datapath 45604602-bc87-4608-9881-9568cbf90870 unbound from our chassis
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.646 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45604602-bc87-4608-9881-9568cbf90870, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.647 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[69cc7b46-1864-48d5-b27c-0ab2fa237921]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.647 159200 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45604602-bc87-4608-9881-9568cbf90870 namespace which is not needed anymore
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.685 280869 DEBUG nova.virt.libvirt.guest [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'ed40901b-0bfc-426a-bf70-48d87ce95aa6' (instance-00000008) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.686 280869 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migration operation has completed
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.686 280869 INFO nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] _post_live_migration() is started..
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [NOTICE]   (310864) : haproxy version is 2.8.14-c23fe91
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [NOTICE]   (310864) : path to executable is /usr/sbin/haproxy
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [WARNING]  (310864) : Exiting Master process...
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [ALERT]    (310864) : Current worker (310869) exited with code 143 (Terminated)
Dec 06 10:15:36 np0005548790.localdomain neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870[310845]: [WARNING]  (310864) : All workers exited. Exiting... (0)
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: libpod-b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9.scope: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain podman[311229]: 2025-12-06 10:15:36.816535367 +0000 UTC m=+0.067856767 container died b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:15:36 np0005548790.localdomain podman[311229]: 2025-12-06 10:15:36.858586484 +0000 UTC m=+0.109907904 container cleanup b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.881 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain podman[311241]: 2025-12-06 10:15:36.890531029 +0000 UTC m=+0.070702664 container cleanup b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:15:36 np0005548790.localdomain systemd[1]: libpod-conmon-b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9.scope: Deactivated successfully.
Dec 06 10:15:36 np0005548790.localdomain podman[311257]: 2025-12-06 10:15:36.958087146 +0000 UTC m=+0.080570871 container remove b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.962 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[058e0174-ba9f-49c3-8f75-bd40d065e14a]: (4, ('Sat Dec  6 10:15:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870 (b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9)\nb03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9\nSat Dec  6 10:15:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45604602-bc87-4608-9881-9568cbf90870 (b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9)\nb03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.963 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0c65f88d-5a37-45f6-8bcb-fc02fe2304c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.964 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45604602-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.967 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain kernel: device tap45604602-b0 left promiscuous mode
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.981 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:36.983 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:36 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.984 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b67f1d96-4022-4bd1-a79f-39a49518bb1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:36.999 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[af71e5b3-100a-4850-b171-aa1d8d53719d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:37.002 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb02c6f-ab6b-4c00-8cdf-7168476d91a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:37.018 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[79807470-97b2-4d59-8feb-9d7853ca760e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1253484, 'reachable_time': 41391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311278, 'error': None, 'target': 'ovnmeta-45604602-bc87-4608-9881-9568cbf90870', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:37.020 159379 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45604602-bc87-4608-9881-9568cbf90870 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:15:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:37.020 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7ff4e8-3e26-4708-9328-97903fd7c6e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 20 KiB/s wr, 232 op/s
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.169 280869 DEBUG nova.compute.manager [req-9cb79581-e6f3-4480-b9a8-99de246f622a req-71addda5-47a9-4ca8-95fb-8a5e44177214 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-unplugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.170 280869 DEBUG oslo_concurrency.lockutils [req-9cb79581-e6f3-4480-b9a8-99de246f622a req-71addda5-47a9-4ca8-95fb-8a5e44177214 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.171 280869 DEBUG oslo_concurrency.lockutils [req-9cb79581-e6f3-4480-b9a8-99de246f622a req-71addda5-47a9-4ca8-95fb-8a5e44177214 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.172 280869 DEBUG oslo_concurrency.lockutils [req-9cb79581-e6f3-4480-b9a8-99de246f622a req-71addda5-47a9-4ca8-95fb-8a5e44177214 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.172 280869 DEBUG nova.compute.manager [req-9cb79581-e6f3-4480-b9a8-99de246f622a req-71addda5-47a9-4ca8-95fb-8a5e44177214 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-unplugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.173 280869 DEBUG nova.compute.manager [req-9cb79581-e6f3-4480-b9a8-99de246f622a req-71addda5-47a9-4ca8-95fb-8a5e44177214 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-unplugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: tmp-crun.Tg02aK.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9474eed38b2a4893a7c08b7b4e3ceb711cea97001938ba768121db76a59f4d1e-merged.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b03f29c8d036c16411e2eb8529a3b5777ec86b2802bb660da46d276f7acfbce9-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: run-netns-ovnmeta\x2d45604602\x2dbc87\x2d4608\x2d9881\x2d9568cbf90870.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4f377242eabb2a26dd4beb6b01f7cad3751ae6c88498a41b76979474b10de6b3-merged.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb253b6be4038423d7c90019a179d26f72c00ba6ea6b2e994076f4a529103bd4-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain systemd[1]: run-netns-ovnmeta\x2d19043ea6\x2dc6b2\x2d4272\x2daa60\x2d1b11a7b5bd93.mount: Deactivated successfully.
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.421 280869 DEBUG nova.network.neutron [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Activated binding for port feb6a13d-305a-4541-a50e-4988833ecf82 and host np0005548789.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.422 280869 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.423 280869 DEBUG nova.virt.libvirt.vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-571789410',display_name='tempest-LiveMigrationTest-server-571789410',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-livemigrationtest-server-571789410',id=8,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:15:26Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9167331b2c424ef6961b096b551f8434',ramdisk_id='',reservation_id='r-9204byw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1593322913',owner_user_name='tempest-LiveMigrationTest-1593322913-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:15:28Z,user_data=None,user_id='b25d9e5ec9eb4368a764482a325b9dda',uuid=ed40901b-0bfc-426a-bf70-48d87ce95aa6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.424 280869 DEBUG nova.network.os_vif_util [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Converting VIF {"id": "feb6a13d-305a-4541-a50e-4988833ecf82", "address": "fa:16:3e:e5:ea:4a", "network": {"id": "45604602-bc87-4608-9881-9568cbf90870", "bridge": "br-int", "label": "tempest-LiveMigrationTest-802114316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "9167331b2c424ef6961b096b551f8434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeb6a13d-30", "ovs_interfaceid": "feb6a13d-305a-4541-a50e-4988833ecf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.425 280869 DEBUG nova.network.os_vif_util [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.425 280869 DEBUG os_vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.427 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.428 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfeb6a13d-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.431 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.433 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.436 280869 INFO os_vif [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:ea:4a,bridge_name='br-int',has_traffic_filtering=True,id=feb6a13d-305a-4541-a50e-4988833ecf82,network=Network(45604602-bc87-4608-9881-9568cbf90870),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfeb6a13d-30')
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.437 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.437 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.438 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.438 280869 DEBUG nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.439 280869 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Deleting instance files /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6_del
Dec 06 10:15:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:37.439 280869 INFO nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Deletion of /var/lib/nova/instances/ed40901b-0bfc-426a-bf70-48d87ce95aa6_del complete
Dec 06 10:15:38 np0005548790.localdomain ceph-mon[301742]: pgmap v141: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 4.6 MiB/s rd, 20 KiB/s wr, 232 op/s
Dec 06 10:15:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:38.012 280869 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016123.0113096, 3d34a856-7613-4158-b859-fb3089fe3bc7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:38.013 280869 INFO nova.compute.manager [-] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] VM Stopped (Lifecycle Event)
Dec 06 10:15:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:38.142 280869 DEBUG nova.compute.manager [None req-b402b6f8-cde7-4b40-a25c-23fcdd188945 - - - - - -] [instance: 3d34a856-7613-4158-b859-fb3089fe3bc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:38 np0005548790.localdomain dnsmasq[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/addn_hosts - 0 addresses
Dec 06 10:15:38 np0005548790.localdomain podman[311296]: 2025-12-06 10:15:38.441768114 +0000 UTC m=+0.060376524 container kill 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:15:38 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/host
Dec 06 10:15:38 np0005548790.localdomain dnsmasq-dhcp[308232]: read /var/lib/neutron/dhcp/04974db5-7261-4ae2-b659-99265ca8d091/opts
Dec 06 10:15:38 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:38Z|00094|binding|INFO|Releasing lport 949736e1-2ab1-4514-a633-0325abfed8f8 from this chassis (sb_readonly=0)
Dec 06 10:15:38 np0005548790.localdomain kernel: device tap949736e1-2a left promiscuous mode
Dec 06 10:15:38 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:38Z|00095|binding|INFO|Setting lport 949736e1-2ab1-4514-a633-0325abfed8f8 down in Southbound
Dec 06 10:15:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:38.593 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:38 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:38.603 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-04974db5-7261-4ae2-b659-99265ca8d091', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-04974db5-7261-4ae2-b659-99265ca8d091', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5bd426c09dd743399e71eb5c44db45cb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=664f45b3-7d97-4383-8014-1d1ca469c527, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=949736e1-2ab1-4514-a633-0325abfed8f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:38 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:38.605 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 949736e1-2ab1-4514-a633-0325abfed8f8 in datapath 04974db5-7261-4ae2-b659-99265ca8d091 unbound from our chassis
Dec 06 10:15:38 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:38.608 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 04974db5-7261-4ae2-b659-99265ca8d091, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:15:38 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:38.609 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d97defa5-5b26-4922-8d81-63b1548b9d55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:38.624 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:38 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:38.907 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:15:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2236174487' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:15:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2236174487' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:15:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v142: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 17 KiB/s wr, 200 op/s
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.353 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.354 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.355 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:39.490 2 INFO neutron.agent.securitygroups_rpc [None req-e5d6490d-2b46-4f4e-92e1-5479a93607f8 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:15:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1736992612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:39.863 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:40 np0005548790.localdomain ceph-mon[301742]: pgmap v142: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 17 KiB/s wr, 200 op/s
Dec 06 10:15:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1736992612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.125 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.126 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11621MB free_disk=41.71154022216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.126 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.126 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.175 280869 INFO nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Updating resource usage from migration 0c4fb838-191a-43fb-92ad-31ab3b6d11ce
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.205 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Migration 0c4fb838-191a-43fb-92ad-31ab3b6d11ce is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.205 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.206 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:15:40 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:40.224 2 INFO neutron.agent.securitygroups_rpc [None req-806a1120-e80b-4f72-b62c-6adbb0e69b26 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.255 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.284 280869 DEBUG nova.compute.manager [req-d327c153-bf6d-4066-9937-b83e8a35dd5f req-4435e64a-cddd-47dd-80c8-5e581d60497a 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.285 280869 DEBUG oslo_concurrency.lockutils [req-d327c153-bf6d-4066-9937-b83e8a35dd5f req-4435e64a-cddd-47dd-80c8-5e581d60497a 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.285 280869 DEBUG oslo_concurrency.lockutils [req-d327c153-bf6d-4066-9937-b83e8a35dd5f req-4435e64a-cddd-47dd-80c8-5e581d60497a 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.285 280869 DEBUG oslo_concurrency.lockutils [req-d327c153-bf6d-4066-9937-b83e8a35dd5f req-4435e64a-cddd-47dd-80c8-5e581d60497a 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.286 280869 DEBUG nova.compute.manager [req-d327c153-bf6d-4066-9937-b83e8a35dd5f req-4435e64a-cddd-47dd-80c8-5e581d60497a 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] No waiting events found dispatching network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.286 280869 WARNING nova.compute.manager [req-d327c153-bf6d-4066-9937-b83e8a35dd5f req-4435e64a-cddd-47dd-80c8-5e581d60497a 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Received unexpected event network-vif-plugged-feb6a13d-305a-4541-a50e-4988833ecf82 for instance with vm_state active and task_state migrating.
Dec 06 10:15:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2671465232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.719 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.726 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.742 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.772 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:15:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:40.773 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 106 op/s
Dec 06 10:15:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2671465232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.460 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.461 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.461 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "ed40901b-0bfc-426a-bf70-48d87ce95aa6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.475 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.476 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.476 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.476 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.476 280869 DEBUG oslo_concurrency.processutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.571 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.774 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.775 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.775 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.776 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.776 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:15:41
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['manila_metadata', 'backups', 'vms', 'volumes', 'manila_data', '.mgr', 'images']
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.887 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:15:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2676620455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006313577435790061 of space, bias 1.0, pg target 1.2627154871580122 quantized to 32 (current 32)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.0021628687418574354 quantized to 16 (current 16)
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:15:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:41.966 280869 DEBUG oslo_concurrency.processutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:15:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:15:42 np0005548790.localdomain dnsmasq[308232]: exiting on receipt of SIGTERM
Dec 06 10:15:42 np0005548790.localdomain podman[311402]: 2025-12-06 10:15:42.007431973 +0000 UTC m=+0.064210218 container kill 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:15:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:15:42 np0005548790.localdomain systemd[1]: libpod-415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a.scope: Deactivated successfully.
Dec 06 10:15:42 np0005548790.localdomain podman[311417]: 2025-12-06 10:15:42.061408944 +0000 UTC m=+0.044867665 container died 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:15:42 np0005548790.localdomain podman[311417]: 2025-12-06 10:15:42.091218809 +0000 UTC m=+0.074677490 container cleanup 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 10:15:42 np0005548790.localdomain systemd[1]: libpod-conmon-415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a.scope: Deactivated successfully.
Dec 06 10:15:42 np0005548790.localdomain ceph-mon[301742]: pgmap v143: 177 pgs: 177 active+clean; 238 MiB data, 871 MiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 106 op/s
Dec 06 10:15:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2676620455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:42 np0005548790.localdomain podman[311425]: 2025-12-06 10:15:42.145595231 +0000 UTC m=+0.118280971 container remove 415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-04974db5-7261-4ae2-b659-99265ca8d091, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.167 280869 WARNING nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.168 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11601MB free_disk=41.71154022216797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": 
"0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.169 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.169 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:42 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:42.183 262327 INFO neutron.agent.dhcp.agent [None req-b76dd95b-5567-475d-a3df-7bb70bdf71c8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:15:42 np0005548790.localdomain podman[311418]: 2025-12-06 10:15:42.204197516 +0000 UTC m=+0.176871486 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.215 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Migration for instance ed40901b-0bfc-426a-bf70-48d87ce95aa6 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 06 10:15:42 np0005548790.localdomain podman[311418]: 2025-12-06 10:15:42.218304577 +0000 UTC m=+0.190978557 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 10:15:42 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.237 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.262 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Migration 0c4fb838-191a-43fb-92ad-31ab3b6d11ce is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.263 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.263 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:15:42 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:42.275 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.302 280869 DEBUG oslo_concurrency.processutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.431 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:15:42 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4064141538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.726 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.735 280869 DEBUG oslo_concurrency.processutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.740 280869 DEBUG nova.compute.provider_tree [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.755 280869 DEBUG nova.scheduler.client.report [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.785 280869 DEBUG nova.compute.resource_tracker [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.785 280869 DEBUG oslo_concurrency.lockutils [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.790 280869 INFO nova.compute.manager [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Migrating instance to np0005548789.localdomain finished successfully.
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.880 280869 INFO nova.scheduler.client.report [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] Deleted allocation for migration 0c4fb838-191a-43fb-92ad-31ab3b6d11ce
Dec 06 10:15:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:42.881 280869 DEBUG nova.virt.libvirt.driver [None req-0fd7e56b-8c90-4a54-9ae0-5e0dfc07787c c0f82d42124043bcb076ae248bc35f73 d60454a44a4b4482bf705ee4e3667605 - - default default] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 06 10:15:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-4ea2a1aea5fa4d28291b0e1707cea6272f5dd98200e5e0fadb479df92052417a-merged.mount: Deactivated successfully.
Dec 06 10:15:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-415e4886690b573ead76f3c09ac49387a8740a490afdbfcb8b1d0b62c0851d1a-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:43 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d04974db5\x2d7261\x2d4ae2\x2db659\x2d99265ca8d091.mount: Deactivated successfully.
Dec 06 10:15:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 304 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 233 op/s
Dec 06 10:15:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1782327393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4064141538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3377775268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:44 np0005548790.localdomain ceph-mon[301742]: pgmap v144: 177 pgs: 177 active+clean; 304 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 4.3 MiB/s wr, 233 op/s
Dec 06 10:15:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 304 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 682 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec 06 10:15:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:15:45 np0005548790.localdomain podman[311486]: 2025-12-06 10:15:45.564206051 +0000 UTC m=+0.080005045 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:15:45 np0005548790.localdomain podman[311486]: 2025-12-06 10:15:45.948290909 +0000 UTC m=+0.464089923 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:15:45 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:15:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:15:46 np0005548790.localdomain systemd[1]: tmp-crun.678zsv.mount: Deactivated successfully.
Dec 06 10:15:46 np0005548790.localdomain podman[311509]: 2025-12-06 10:15:46.077882104 +0000 UTC m=+0.087405235 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller)
Dec 06 10:15:46 np0005548790.localdomain podman[311509]: 2025-12-06 10:15:46.138155695 +0000 UTC m=+0.147678816 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 10:15:46 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:15:46 np0005548790.localdomain ceph-mon[301742]: pgmap v145: 177 pgs: 177 active+clean; 304 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 682 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec 06 10:15:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:46.888 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 304 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 682 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec 06 10:15:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:47.451 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:48 np0005548790.localdomain ceph-mon[301742]: pgmap v146: 177 pgs: 177 active+clean; 304 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 682 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec 06 10:15:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:48.396 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:15:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:15:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:15:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:48.397 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:15:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:15:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:15:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158564 "" "Go-http-client/1.1"
Dec 06 10:15:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:15:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19679 "" "Go-http-client/1.1"
Dec 06 10:15:48 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:48.511 2 INFO neutron.agent.securitygroups_rpc [None req-0ed3e916-bdef-45c7-9c1d-50729e74f02a 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:15:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 683 KiB/s rd, 4.3 MiB/s wr, 134 op/s
Dec 06 10:15:49 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:49.995 2 INFO neutron.agent.securitygroups_rpc [req-da70e705-23ca-45d2-aa6c-68d8abc979e1 req-8ff8aa52-7146-4285-bb8a-51bbd99a36a5 da7bbd24eb95438897585b10577ea2e0 da995d8e002548889747013c0eeca935 - - default default] Security group member updated ['581a4637-eff2-45f4-92f3-d575b736a840']
Dec 06 10:15:50 np0005548790.localdomain ceph-mon[301742]: pgmap v147: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 683 KiB/s rd, 4.3 MiB/s wr, 134 op/s
Dec 06 10:15:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 679 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Dec 06 10:15:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:51.423 280869 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016136.4222028, ed40901b-0bfc-426a-bf70-48d87ce95aa6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:15:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:51.423 280869 INFO nova.compute.manager [-] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] VM Stopped (Lifecycle Event)
Dec 06 10:15:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:51.450 280869 DEBUG nova.compute.manager [None req-84bb9694-1cab-413d-bc6d-5d39768c45f7 - - - - - -] [instance: ed40901b-0bfc-426a-bf70-48d87ce95aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:15:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:51.553 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:51.931 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:52 np0005548790.localdomain ceph-mon[301742]: pgmap v148: 177 pgs: 177 active+clean; 304 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 679 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Dec 06 10:15:52 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1067841479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:15:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:52.454 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 184 op/s
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:15:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:15:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:15:53 np0005548790.localdomain sudo[311536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:15:53 np0005548790.localdomain sudo[311536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:15:53 np0005548790.localdomain sudo[311536]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:54 np0005548790.localdomain sudo[311554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:15:54 np0005548790.localdomain sudo[311554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 184 op/s
Dec 06 10:15:54 np0005548790.localdomain sudo[311554]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:15:54 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 34c3d1da-d5df-4359-8946-4ba81e3729d9 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:15:54 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 34c3d1da-d5df-4359-8946-4ba81e3729d9 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:15:54 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 34c3d1da-d5df-4359-8946-4ba81e3729d9 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:15:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:15:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:55.039 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:55 np0005548790.localdomain sudo[311603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:15:55 np0005548790.localdomain sudo[311603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:15:55 np0005548790.localdomain sudo[311603]: pam_unix(sudo:session): session closed for user root
Dec 06 10:15:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 28 KiB/s wr, 56 op/s
Dec 06 10:15:55 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:15:55 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:15:55 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:15:55 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:15:56 np0005548790.localdomain ceph-mon[301742]: pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 28 KiB/s wr, 56 op/s
Dec 06 10:15:56 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:56.498 2 INFO neutron.agent.securitygroups_rpc [None req-97ddf7c5-61a2-4ea7-a37a-afceb032745e 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:15:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:56.736 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:56.934 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:56 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:15:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:15:56 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:15:56.995 2 INFO neutron.agent.securitygroups_rpc [None req-e28bc6dc-5f9c-4334-81fe-cd06724fee5d b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']
Dec 06 10:15:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 28 KiB/s wr, 56 op/s
Dec 06 10:15:57 np0005548790.localdomain systemd[1]: tmp-crun.t200HU.mount: Deactivated successfully.
Dec 06 10:15:57 np0005548790.localdomain dnsmasq[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/addn_hosts - 0 addresses
Dec 06 10:15:57 np0005548790.localdomain dnsmasq-dhcp[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/host
Dec 06 10:15:57 np0005548790.localdomain dnsmasq-dhcp[309749]: read /var/lib/neutron/dhcp/19043ea6-c6b2-4272-aa60-1b11a7b5bd93/opts
Dec 06 10:15:57 np0005548790.localdomain podman[311638]: 2025-12-06 10:15:57.380873351 +0000 UTC m=+0.049400476 container kill 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:15:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:57.456 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:15:58 np0005548790.localdomain ceph-mon[301742]: pgmap v151: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 28 KiB/s wr, 56 op/s
Dec 06 10:15:58 np0005548790.localdomain dnsmasq[309749]: exiting on receipt of SIGTERM
Dec 06 10:15:58 np0005548790.localdomain podman[311671]: 2025-12-06 10:15:58.665089385 +0000 UTC m=+0.061858425 container kill 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:15:58 np0005548790.localdomain systemd[1]: libpod-7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81.scope: Deactivated successfully.
Dec 06 10:15:58 np0005548790.localdomain podman[311683]: 2025-12-06 10:15:58.738615603 +0000 UTC m=+0.057955629 container died 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:15:58 np0005548790.localdomain systemd[1]: tmp-crun.OQZllF.mount: Deactivated successfully.
Dec 06 10:15:58 np0005548790.localdomain podman[311683]: 2025-12-06 10:15:58.781889003 +0000 UTC m=+0.101228979 container cleanup 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:15:58 np0005548790.localdomain systemd[1]: libpod-conmon-7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81.scope: Deactivated successfully.
Dec 06 10:15:58 np0005548790.localdomain podman[311685]: 2025-12-06 10:15:58.816019526 +0000 UTC m=+0.126964944 container remove 7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19043ea6-c6b2-4272-aa60-1b11a7b5bd93, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 10:15:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:58.827 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:58 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:58Z|00096|binding|INFO|Releasing lport 1766b235-0baa-458c-b553-7258f331e206 from this chassis (sb_readonly=0)
Dec 06 10:15:58 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:15:58Z|00097|binding|INFO|Setting lport 1766b235-0baa-458c-b553-7258f331e206 down in Southbound
Dec 06 10:15:58 np0005548790.localdomain kernel: device tap1766b235-0b left promiscuous mode
Dec 06 10:15:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:58.838 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19043ea6-c6b2-4272-aa60-1b11a7b5bd93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9167331b2c424ef6961b096b551f8434', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=927c8639-172d-4240-b8a1-85db1fd6c03d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=1766b235-0baa-458c-b553-7258f331e206) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:15:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:58.840 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 1766b235-0baa-458c-b553-7258f331e206 in datapath 19043ea6-c6b2-4272-aa60-1b11a7b5bd93 unbound from our chassis
Dec 06 10:15:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:58.844 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 19043ea6-c6b2-4272-aa60-1b11a7b5bd93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:15:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:15:58.845 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa1ce2c-73a7-46ed-aaf9-5103636106e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:15:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:15:58.851 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:15:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:15:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 28 KiB/s wr, 56 op/s
Dec 06 10:15:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:59.112 262327 INFO neutron.agent.dhcp.agent [None req-43c2036f-452d-4ba7-9b1c-badba5377d4c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:15:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:59.114 262327 INFO neutron.agent.dhcp.agent [None req-43c2036f-452d-4ba7-9b1c-badba5377d4c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:15:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d31bfadcf4150577d36d5f63ebd34ecf371bac720edb6b14f5d4d13960aa3a3c-merged.mount: Deactivated successfully.
Dec 06 10:15:59 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b9bb58a6331674495e5f9f3d408ccf521c163d2293bf9d3195a368f77417d81-userdata-shm.mount: Deactivated successfully.
Dec 06 10:15:59 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d19043ea6\x2dc6b2\x2d4272\x2daa60\x2d1b11a7b5bd93.mount: Deactivated successfully.
Dec 06 10:15:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:15:59.729 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:00 np0005548790.localdomain ceph-mon[301742]: pgmap v152: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 28 KiB/s wr, 56 op/s
Dec 06 10:16:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:00.251 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:00 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:00.365 2 INFO neutron.agent.securitygroups_rpc [None req-96bdfd29-c14f-4ef8-b3b0-32d637d65e93 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:16:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Dec 06 10:16:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:16:01 np0005548790.localdomain podman[311712]: 2025-12-06 10:16:01.579638523 +0000 UTC m=+0.088790003 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:16:01 np0005548790.localdomain podman[311712]: 2025-12-06 10:16:01.585693066 +0000 UTC m=+0.094844506 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 10:16:01 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:16:01 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:01.608 2 INFO neutron.agent.securitygroups_rpc [None req-7d84f32e-96fa-49ab-97a7-a8cf557247b9 b25d9e5ec9eb4368a764482a325b9dda 9167331b2c424ef6961b096b551f8434 - - default default] Security group member updated ['4c82b56e-0fc5-4c7f-8922-ceb8236815fd']
Dec 06 10:16:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:01.975 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:02 np0005548790.localdomain ceph-mon[301742]: pgmap v153: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Dec 06 10:16:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:02.458 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Dec 06 10:16:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:03.306 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:04 np0005548790.localdomain ceph-mon[301742]: pgmap v154: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Dec 06 10:16:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:05.302 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:05 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:05.824 2 INFO neutron.agent.securitygroups_rpc [None req-be960e3b-e920-4ec4-8e87-e409a0af324a 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:16:06 np0005548790.localdomain ceph-mon[301742]: pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:16:06 np0005548790.localdomain podman[311730]: 2025-12-06 10:16:06.579142963 +0000 UTC m=+0.086796049 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:16:06 np0005548790.localdomain podman[311730]: 2025-12-06 10:16:06.591481186 +0000 UTC m=+0.099134272 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:16:06 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:16:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:16:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:16:06 np0005548790.localdomain podman[311749]: 2025-12-06 10:16:06.734287548 +0000 UTC m=+0.090951660 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:16:06 np0005548790.localdomain podman[311749]: 2025-12-06 10:16:06.774910028 +0000 UTC m=+0.131574130 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:16:06 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:16:06 np0005548790.localdomain podman[311750]: 2025-12-06 10:16:06.794380063 +0000 UTC m=+0.147055198 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:16:06 np0005548790.localdomain podman[311750]: 2025-12-06 10:16:06.838348943 +0000 UTC m=+0.191024148 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec 06 10:16:06 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:16:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:06.978 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:07.460 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:07 np0005548790.localdomain systemd[1]: tmp-crun.eGUD1Y.mount: Deactivated successfully.
Dec 06 10:16:08 np0005548790.localdomain ceph-mon[301742]: pgmap v156: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:10 np0005548790.localdomain ceph-mon[301742]: pgmap v157: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:10.970 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:11.326 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:11.327 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:16:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:11.328 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:11.979 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:12 np0005548790.localdomain ceph-mon[301742]: pgmap v158: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:12.462 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:16:12 np0005548790.localdomain systemd[1]: tmp-crun.LoMLTN.mount: Deactivated successfully.
Dec 06 10:16:12 np0005548790.localdomain podman[311794]: 2025-12-06 10:16:12.575070731 +0000 UTC m=+0.088218627 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 10:16:12 np0005548790.localdomain podman[311794]: 2025-12-06 10:16:12.59276847 +0000 UTC m=+0.105916356 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:16:12 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:16:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:12.617 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:12 np0005548790.localdomain podman[311832]: 2025-12-06 10:16:12.852903236 +0000 UTC m=+0.046534279 container kill 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:16:12 np0005548790.localdomain dnsmasq[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/addn_hosts - 0 addresses
Dec 06 10:16:12 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/host
Dec 06 10:16:12 np0005548790.localdomain dnsmasq-dhcp[309319]: read /var/lib/neutron/dhcp/024ac467-702d-4aa3-9f11-5a052a7660a7/opts
Dec 06 10:16:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:13Z|00098|binding|INFO|Releasing lport 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 from this chassis (sb_readonly=0)
Dec 06 10:16:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:13Z|00099|binding|INFO|Setting lport 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 down in Southbound
Dec 06 10:16:13 np0005548790.localdomain kernel: device tap8a4f3ee4-23 left promiscuous mode
Dec 06 10:16:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:13.016 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:13.024 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-024ac467-702d-4aa3-9f11-5a052a7660a7', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-024ac467-702d-4aa3-9f11-5a052a7660a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd60454a44a4b4482bf705ee4e3667605', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8eef2af5-d600-4158-b892-77d6b006b733, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:16:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:13.027 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 8a4f3ee4-2353-4fac-b8d8-f7a7aa1b44b2 in datapath 024ac467-702d-4aa3-9f11-5a052a7660a7 unbound from our chassis
Dec 06 10:16:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:13.030 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 024ac467-702d-4aa3-9f11-5a052a7660a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:16:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:13.031 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[89b458b0-c6b5-489e-81a6-b76ab5030b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:13.037 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:13.039 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:14 np0005548790.localdomain ceph-mon[301742]: pgmap v159: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:16:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:16:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:16:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:15.648 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:16 np0005548790.localdomain dnsmasq[309319]: exiting on receipt of SIGTERM
Dec 06 10:16:16 np0005548790.localdomain podman[311872]: 2025-12-06 10:16:16.19988718 +0000 UTC m=+0.071191507 container kill 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: libpod-6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10.scope: Deactivated successfully.
Dec 06 10:16:16 np0005548790.localdomain ceph-mon[301742]: pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:16 np0005548790.localdomain podman[311888]: 2025-12-06 10:16:16.260594532 +0000 UTC m=+0.041976316 container died 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:16:16 np0005548790.localdomain podman[311900]: 2025-12-06 10:16:16.324050628 +0000 UTC m=+0.094901868 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10-userdata-shm.mount: Deactivated successfully.
Dec 06 10:16:16 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:16.331 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1d9d2cc3cfb184effd5973b8db2ceccd9f50b7d1e73e08ee5ddad56bd1deb823-merged.mount: Deactivated successfully.
Dec 06 10:16:16 np0005548790.localdomain podman[311888]: 2025-12-06 10:16:16.352122027 +0000 UTC m=+0.133503831 container remove 6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-024ac467-702d-4aa3-9f11-5a052a7660a7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: libpod-conmon-6aa6ea9e6d6dc8e6b6af635c0e9adfd7e75dc372a4584cf6919347b15aefda10.scope: Deactivated successfully.
Dec 06 10:16:16 np0005548790.localdomain podman[311900]: 2025-12-06 10:16:16.362299293 +0000 UTC m=+0.133150543 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:16:16 np0005548790.localdomain podman[311894]: 2025-12-06 10:16:16.420078846 +0000 UTC m=+0.190794222 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:16:16 np0005548790.localdomain podman[311894]: 2025-12-06 10:16:16.433117298 +0000 UTC m=+0.203832674 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:16:16 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:16:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:16.566 262327 INFO neutron.agent.dhcp.agent [None req-b0f868ec-6efb-4092-b142-2d1978bfe388 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:16.567 262327 INFO neutron.agent.dhcp.agent [None req-b0f868ec-6efb-4092-b142-2d1978bfe388 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:16.983 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:17 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d024ac467\x2d702d\x2d4aa3\x2d9f11\x2d5a052a7660a7.mount: Deactivated successfully.
Dec 06 10:16:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:17.287 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:17.467 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:18 np0005548790.localdomain ceph-mon[301742]: pgmap v161: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:18.073 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:16:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:16:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:16:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:16:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:16:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18739 "" "Go-http-client/1.1"
Dec 06 10:16:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:20 np0005548790.localdomain ceph-mon[301742]: pgmap v162: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:21.985 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:22 np0005548790.localdomain ceph-mon[301742]: pgmap v163: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:22.474 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:22.726 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:22 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:22.876 2 INFO neutron.agent.securitygroups_rpc [None req-08283fcf-8c3f-4ce1-8201-1776fe09eb71 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:16:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:16:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:16:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:24 np0005548790.localdomain ceph-mon[301742]: pgmap v164: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:24 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:24.302 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:25.246 2 INFO neutron.agent.securitygroups_rpc [None req-2f0fe649-a0ce-475a-a444-c6db3fc27153 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:16:26 np0005548790.localdomain ceph-mon[301742]: pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:26.988 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:27.252 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:27.511 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:28 np0005548790.localdomain ceph-mon[301742]: pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:29.311 2 INFO neutron.agent.securitygroups_rpc [None req-64ece17b-51fa-4f7d-ac9f-f7ae51f6ef1a 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:16:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:29.841 2 INFO neutron.agent.securitygroups_rpc [None req-e4d175d7-f151-45a2-bfa9-dd114b2ac98c 13b250438f8e49ee9d0d9f0fe4791c05 a22ced63e346459ab637424ae7833af7 - - default default] Security group member updated ['55c805cd-9bbe-4434-83af-206ee080e6b9']
Dec 06 10:16:30 np0005548790.localdomain ceph-mon[301742]: pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:30 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:30.411 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:31.991 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:32 np0005548790.localdomain ceph-mon[301742]: pgmap v168: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:32 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2582979284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:16:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:32.513 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:32 np0005548790.localdomain podman[311961]: 2025-12-06 10:16:32.569127322 +0000 UTC m=+0.083852489 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:16:32 np0005548790.localdomain podman[311961]: 2025-12-06 10:16:32.599916665 +0000 UTC m=+0.114641822 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:16:32 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:16:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:33 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4066488691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:34 np0005548790.localdomain sshd[311980]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:16:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:34 np0005548790.localdomain ceph-mon[301742]: pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:35.337 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:35.338 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:16:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:35.338 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:16:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:35.344 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:16:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:35.356 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:16:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:36.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:36 np0005548790.localdomain ceph-mon[301742]: pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:36.994 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:37.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:37.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:16:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:16:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:16:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:37.515 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:37 np0005548790.localdomain podman[311983]: 2025-12-06 10:16:37.588639923 +0000 UTC m=+0.085944106 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:16:37 np0005548790.localdomain podman[311983]: 2025-12-06 10:16:37.60111974 +0000 UTC m=+0.098423913 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute)
Dec 06 10:16:37 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:16:37 np0005548790.localdomain podman[311984]: 2025-12-06 10:16:37.648013309 +0000 UTC m=+0.139281968 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec 06 10:16:37 np0005548790.localdomain podman[311982]: 2025-12-06 10:16:37.692585554 +0000 UTC m=+0.192371064 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:16:37 np0005548790.localdomain podman[311982]: 2025-12-06 10:16:37.69834464 +0000 UTC m=+0.198130120 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:16:37 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:16:37 np0005548790.localdomain podman[311984]: 2025-12-06 10:16:37.716963454 +0000 UTC m=+0.208232173 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, version=9.6, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, distribution-scope=public)
Dec 06 10:16:37 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:16:38 np0005548790.localdomain ceph-mon[301742]: pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:38.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:38.577 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:16:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1196607636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:16:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:16:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1196607636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:16:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1196607636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:16:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1196607636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:16:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:40 np0005548790.localdomain sshd[311980]: Connection closed by 101.47.160.186 port 33740 [preauth]
Dec 06 10:16:40 np0005548790.localdomain ceph-mon[301742]: pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.359 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.360 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:16:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:16:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2751703348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:40.831 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.036 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.038 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11624MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.039 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.039 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.117 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.118 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:16:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2751703348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.137 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:16:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:16:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2821502390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.589 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.596 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.612 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.615 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:16:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:41.616 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:16:41
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['manila_data', 'manila_metadata', 'volumes', '.mgr', 'images', 'backups', 'vms']
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16)
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:16:41 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:16:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:42.033 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:42 np0005548790.localdomain ceph-mon[301742]: pgmap v173: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1359574343' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2821502390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:42.518 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:42.617 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:42.617 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:42.618 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:16:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:42.618 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:16:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2914913536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:16:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:16:43 np0005548790.localdomain systemd[1]: tmp-crun.ch9r6j.mount: Deactivated successfully.
Dec 06 10:16:43 np0005548790.localdomain podman[312090]: 2025-12-06 10:16:43.576147974 +0000 UTC m=+0.090255592 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:16:43 np0005548790.localdomain podman[312090]: 2025-12-06 10:16:43.614852301 +0000 UTC m=+0.128959949 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:16:43 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:16:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:43.752 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:44 np0005548790.localdomain ceph-mon[301742]: pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:46 np0005548790.localdomain ceph-mon[301742]: pgmap v175: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:16:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:16:46 np0005548790.localdomain podman[312108]: 2025-12-06 10:16:46.564735095 +0000 UTC m=+0.078908496 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:16:46 np0005548790.localdomain podman[312108]: 2025-12-06 10:16:46.572821974 +0000 UTC m=+0.086995375 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:16:46 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:16:46 np0005548790.localdomain podman[312109]: 2025-12-06 10:16:46.6259267 +0000 UTC m=+0.135349592 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:16:46 np0005548790.localdomain podman[312109]: 2025-12-06 10:16:46.689180721 +0000 UTC m=+0.198603603 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:16:46 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:16:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:47.073 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:47.521 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:48 np0005548790.localdomain ceph-mon[301742]: pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:16:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:16:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:16:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:16:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:48.399 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:16:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:16:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:16:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:16:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18735 "" "Go-http-client/1.1"
Dec 06 10:16:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:50 np0005548790.localdomain ceph-mon[301742]: pgmap v177: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:52.074 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e116 e116: 6 total, 6 up, 6 in
Dec 06 10:16:52 np0005548790.localdomain ceph-mon[301742]: pgmap v178: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:52.523 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:52.710 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 4.3 KiB/s wr, 35 op/s
Dec 06 10:16:53 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:53.167 2 INFO neutron.agent.securitygroups_rpc [None req-26ae0ef5-9433-41c4-a064-a0d5d3110043 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:16:53 np0005548790.localdomain ceph-mon[301742]: osdmap e116: 6 total, 6 up, 6 in
Dec 06 10:16:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e117 e117: 6 total, 6 up, 6 in
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:16:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:16:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:16:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e118 e118: 6 total, 6 up, 6 in
Dec 06 10:16:54 np0005548790.localdomain ceph-mon[301742]: pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 4.3 KiB/s wr, 35 op/s
Dec 06 10:16:54 np0005548790.localdomain ceph-mon[301742]: osdmap e117: 6 total, 6 up, 6 in
Dec 06 10:16:54 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:54.700 262327 INFO neutron.agent.linux.ip_lib [None req-2c7c97b2-079b-4620-bb93-3eb39a1ca908 - - - - - -] Device tapc290d0b6-ee cannot be used as it has no MAC address
Dec 06 10:16:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:54.725 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:54 np0005548790.localdomain kernel: device tapc290d0b6-ee entered promiscuous mode
Dec 06 10:16:54 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016214.7334] manager: (tapc290d0b6-ee): new Generic device (/org/freedesktop/NetworkManager/Devices/25)
Dec 06 10:16:54 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:54Z|00100|binding|INFO|Claiming lport c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 for this chassis.
Dec 06 10:16:54 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:54Z|00101|binding|INFO|c290d0b6-ee62-46ad-8b4b-6ee39f9294c0: Claiming unknown
Dec 06 10:16:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:54.734 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:54 np0005548790.localdomain systemd-udevd[312166]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:54Z|00102|binding|INFO|Setting lport c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 ovn-installed in OVS
Dec 06 10:16:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:54.769 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:54.800 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:54 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapc290d0b6-ee: No such device
Dec 06 10:16:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:54.828 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:55 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:55Z|00103|binding|INFO|Setting lport c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 up in Southbound
Dec 06 10:16:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:55.050 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-1399d1e5-1513-4962-bae4-5ed900ff7211', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1399d1e5-1513-4962-bae4-5ed900ff7211', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cee3e0c1575f4b46bd60ec5b2e858b9d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af784563-e963-4a26-a17b-1d37d6980aac, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=c290d0b6-ee62-46ad-8b4b-6ee39f9294c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:16:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:55.052 159200 INFO neutron.agent.ovn.metadata.agent [-] Port c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 in datapath 1399d1e5-1513-4962-bae4-5ed900ff7211 bound to our chassis
Dec 06 10:16:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:55.056 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 79147712-2915-48f4-90ec-23e94492bdf3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:16:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:55.056 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1399d1e5-1513-4962-bae4-5ed900ff7211, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:16:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:55.057 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3a752f00-e35d-4427-8d04-3bbcdddc319b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:55 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:55.065 2 INFO neutron.agent.securitygroups_rpc [None req-6596b1da-4291-462f-a9bc-899ad3053051 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:16:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 7.2 KiB/s wr, 59 op/s
Dec 06 10:16:55 np0005548790.localdomain sudo[312206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:16:55 np0005548790.localdomain sudo[312206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:16:55 np0005548790.localdomain sudo[312206]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:55 np0005548790.localdomain sudo[312229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:16:55 np0005548790.localdomain sudo[312229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:16:55 np0005548790.localdomain ceph-mon[301742]: osdmap e118: 6 total, 6 up, 6 in
Dec 06 10:16:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e119 e119: 6 total, 6 up, 6 in
Dec 06 10:16:55 np0005548790.localdomain podman[312290]: 
Dec 06 10:16:55 np0005548790.localdomain podman[312290]: 2025-12-06 10:16:55.850942305 +0000 UTC m=+0.086051699 container create 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:16:55 np0005548790.localdomain systemd[1]: Started libpod-conmon-079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2.scope.
Dec 06 10:16:55 np0005548790.localdomain systemd[1]: tmp-crun.709ve8.mount: Deactivated successfully.
Dec 06 10:16:55 np0005548790.localdomain podman[312290]: 2025-12-06 10:16:55.809262758 +0000 UTC m=+0.044372202 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:16:55 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:16:55 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69795329f816d8bf8ebc2e9d9d9d492772f9918c14322138757dfdc2ef71d2a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:55 np0005548790.localdomain sudo[312229]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:55 np0005548790.localdomain podman[312290]: 2025-12-06 10:16:55.929719876 +0000 UTC m=+0.164829260 container init 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:16:55 np0005548790.localdomain podman[312290]: 2025-12-06 10:16:55.938818431 +0000 UTC m=+0.173927825 container start 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:16:55 np0005548790.localdomain dnsmasq[312321]: started, version 2.85 cachesize 150
Dec 06 10:16:55 np0005548790.localdomain dnsmasq[312321]: DNS service limited to local subnets
Dec 06 10:16:55 np0005548790.localdomain dnsmasq[312321]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:16:55 np0005548790.localdomain dnsmasq[312321]: warning: no upstream servers configured
Dec 06 10:16:55 np0005548790.localdomain dnsmasq-dhcp[312321]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:16:55 np0005548790.localdomain dnsmasq[312321]: read /var/lib/neutron/dhcp/1399d1e5-1513-4962-bae4-5ed900ff7211/addn_hosts - 0 addresses
Dec 06 10:16:55 np0005548790.localdomain dnsmasq-dhcp[312321]: read /var/lib/neutron/dhcp/1399d1e5-1513-4962-bae4-5ed900ff7211/host
Dec 06 10:16:55 np0005548790.localdomain dnsmasq-dhcp[312321]: read /var/lib/neutron/dhcp/1399d1e5-1513-4962-bae4-5ed900ff7211/opts
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:16:56 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev d23861a3-2625-4790-9107-4e8f61949b20 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:16:56 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev d23861a3-2625-4790-9107-4e8f61949b20 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:16:56 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event d23861a3-2625-4790-9107-4e8f61949b20 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:16:56 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:56.148 262327 INFO neutron.agent.dhcp.agent [None req-e147ba3a-e9d2-4947-9c98-df7a7e545a4f - - - - - -] DHCP configuration for ports {'8732035d-1123-4ed9-8b0a-7b94749ef63f'} is completed
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e120 e120: 6 total, 6 up, 6 in
Dec 06 10:16:56 np0005548790.localdomain sudo[312322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 7.2 KiB/s wr, 59 op/s
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: osdmap e119: 6 total, 6 up, 6 in
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:16:56 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:16:56 np0005548790.localdomain sudo[312322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:16:56 np0005548790.localdomain sudo[312322]: pam_unix(sudo:session): session closed for user root
Dec 06 10:16:56 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:56.534 2 INFO neutron.agent.securitygroups_rpc [None req-c7b28b51-59d5-4f0a-ad7d-a932cd0ad09d 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:16:57 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:16:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:16:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.124 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:57 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:57.216 2 INFO neutron.agent.securitygroups_rpc [None req-dca415e6-2c01-4081-a144-3151bae67c51 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:16:57 np0005548790.localdomain ceph-mon[301742]: osdmap e120: 6 total, 6 up, 6 in
Dec 06 10:16:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:16:57 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:57.352 262327 INFO neutron.agent.linux.ip_lib [None req-61ef96fa-eb71-4daf-b319-a4fc3d675075 - - - - - -] Device tap3780d604-8f cannot be used as it has no MAC address
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.374 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:57 np0005548790.localdomain kernel: device tap3780d604-8f entered promiscuous mode
Dec 06 10:16:57 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:57Z|00104|binding|INFO|Claiming lport 3780d604-8f3c-416e-a6a2-35d810aa2cb5 for this chassis.
Dec 06 10:16:57 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:57Z|00105|binding|INFO|3780d604-8f3c-416e-a6a2-35d810aa2cb5: Claiming unknown
Dec 06 10:16:57 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016217.3834] manager: (tap3780d604-8f): new Generic device (/org/freedesktop/NetworkManager/Devices/26)
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.382 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:57 np0005548790.localdomain systemd-udevd[312168]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:16:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:57.396 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-2f7ffd81-9f27-41ba-85a3-f6c464090ebd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f7ffd81-9f27-41ba-85a3-f6c464090ebd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37dcf5204733427ebb8bdbe574dca584', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96cb3e55-a48b-4639-b988-01bcb84fe479, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=3780d604-8f3c-416e-a6a2-35d810aa2cb5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:16:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:57.398 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 3780d604-8f3c-416e-a6a2-35d810aa2cb5 in datapath 2f7ffd81-9f27-41ba-85a3-f6c464090ebd bound to our chassis
Dec 06 10:16:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:57.402 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2f7ffd81-9f27-41ba-85a3-f6c464090ebd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:16:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:16:57.403 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4d4945-138e-4124-b968-7ac795a838e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:57Z|00106|binding|INFO|Setting lport 3780d604-8f3c-416e-a6a2-35d810aa2cb5 ovn-installed in OVS
Dec 06 10:16:57 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:16:57Z|00107|binding|INFO|Setting lport 3780d604-8f3c-416e-a6a2-35d810aa2cb5 up in Southbound
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.419 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap3780d604-8f: No such device
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.453 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.484 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:16:57.525 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:16:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e121 e121: 6 total, 6 up, 6 in
Dec 06 10:16:58 np0005548790.localdomain podman[312420]: 
Dec 06 10:16:58 np0005548790.localdomain podman[312420]: 2025-12-06 10:16:58.28915519 +0000 UTC m=+0.085536344 container create c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:16:58 np0005548790.localdomain ceph-mon[301742]: pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:16:58 np0005548790.localdomain ceph-mon[301742]: osdmap e121: 6 total, 6 up, 6 in
Dec 06 10:16:58 np0005548790.localdomain systemd[1]: Started libpod-conmon-c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce.scope.
Dec 06 10:16:58 np0005548790.localdomain podman[312420]: 2025-12-06 10:16:58.242499029 +0000 UTC m=+0.038880233 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:16:58 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:16:58 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57b0d22eb355abde3c775ccf121f353083ab46383967bd63e19ad96b9f05f499/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:16:58 np0005548790.localdomain podman[312420]: 2025-12-06 10:16:58.358891636 +0000 UTC m=+0.155272790 container init c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:16:58 np0005548790.localdomain podman[312420]: 2025-12-06 10:16:58.368580909 +0000 UTC m=+0.164962063 container start c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:16:58 np0005548790.localdomain dnsmasq[312439]: started, version 2.85 cachesize 150
Dec 06 10:16:58 np0005548790.localdomain dnsmasq[312439]: DNS service limited to local subnets
Dec 06 10:16:58 np0005548790.localdomain dnsmasq[312439]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:16:58 np0005548790.localdomain dnsmasq[312439]: warning: no upstream servers configured
Dec 06 10:16:58 np0005548790.localdomain dnsmasq-dhcp[312439]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:16:58 np0005548790.localdomain dnsmasq[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/addn_hosts - 0 addresses
Dec 06 10:16:58 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/host
Dec 06 10:16:58 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/opts
Dec 06 10:16:58 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:58.382 2 INFO neutron.agent.securitygroups_rpc [None req-97e21b62-44c1-4fc5-958c-bcc0268c52d3 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:16:58 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:16:58.508 262327 INFO neutron.agent.dhcp.agent [None req-a8fc789d-44ca-44a5-b5d5-1cd68d0d927b - - - - - -] DHCP configuration for ports {'e670c6a4-08cc-4096-ab1f-37bd6485b485'} is completed
Dec 06 10:16:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:16:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 15 KiB/s wr, 153 op/s
Dec 06 10:16:59 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:16:59.204 2 INFO neutron.agent.securitygroups_rpc [None req-a68c2924-ed4d-4682-a596-626401a139a3 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:16:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e122 e122: 6 total, 6 up, 6 in
Dec 06 10:17:00 np0005548790.localdomain ceph-mon[301742]: pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 15 KiB/s wr, 153 op/s
Dec 06 10:17:00 np0005548790.localdomain ceph-mon[301742]: osdmap e122: 6 total, 6 up, 6 in
Dec 06 10:17:00 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:00.352 2 INFO neutron.agent.securitygroups_rpc [None req-1e316261-d213-40cc-b644-592e2d6242e7 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:17:01 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:01.031 2 INFO neutron.agent.securitygroups_rpc [None req-af1607ac-cf21-43e9-9dc2-41d6c40546b7 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:17:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 12 KiB/s wr, 128 op/s
Dec 06 10:17:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e123 e123: 6 total, 6 up, 6 in
Dec 06 10:17:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:02.128 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:02.527 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:03 np0005548790.localdomain ceph-mon[301742]: pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 12 KiB/s wr, 128 op/s
Dec 06 10:17:03 np0005548790.localdomain ceph-mon[301742]: osdmap e123: 6 total, 6 up, 6 in
Dec 06 10:17:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 18 KiB/s wr, 198 op/s
Dec 06 10:17:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:17:03 np0005548790.localdomain podman[312440]: 2025-12-06 10:17:03.57747894 +0000 UTC m=+0.081465084 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:17:03 np0005548790.localdomain podman[312440]: 2025-12-06 10:17:03.582658191 +0000 UTC m=+0.086644305 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 10:17:03 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:17:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:04 np0005548790.localdomain ceph-mon[301742]: pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 18 KiB/s wr, 198 op/s
Dec 06 10:17:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e124 e124: 6 total, 6 up, 6 in
Dec 06 10:17:04 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:04.263 2 INFO neutron.agent.securitygroups_rpc [None req-585d973c-6716-4802-939d-774d36a541bf 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:17:05 np0005548790.localdomain ceph-mon[301742]: osdmap e124: 6 total, 6 up, 6 in
Dec 06 10:17:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 74 op/s
Dec 06 10:17:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:05.335 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:06.109 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:05Z, description=, device_id=0df74357-660e-4fa2-9159-46e39f559540, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85828be0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85828cd0>], id=08c3c00a-de2d-4ed0-a507-32776d951596, ip_allocation=immediate, mac_address=fa:16:3e:b5:47:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:52Z, description=, dns_domain=, id=2f7ffd81-9f27-41ba-85a3-f6c464090ebd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1133471416-network, port_security_enabled=True, project_id=37dcf5204733427ebb8bdbe574dca584, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38217, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1218, status=ACTIVE, subnets=['4a91a984-aa5b-414c-9ac0-4ed9dc9f03d4'], tags=[], tenant_id=37dcf5204733427ebb8bdbe574dca584, updated_at=2025-12-06T10:16:56Z, vlan_transparent=None, network_id=2f7ffd81-9f27-41ba-85a3-f6c464090ebd, port_security_enabled=False, project_id=37dcf5204733427ebb8bdbe574dca584, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1263, status=DOWN, tags=[], tenant_id=37dcf5204733427ebb8bdbe574dca584, updated_at=2025-12-06T10:17:05Z on network 2f7ffd81-9f27-41ba-85a3-f6c464090ebd
Dec 06 10:17:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e125 e125: 6 total, 6 up, 6 in
Dec 06 10:17:06 np0005548790.localdomain ceph-mon[301742]: pgmap v194: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 6.3 KiB/s wr, 74 op/s
Dec 06 10:17:06 np0005548790.localdomain dnsmasq[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/addn_hosts - 1 addresses
Dec 06 10:17:06 np0005548790.localdomain podman[312476]: 2025-12-06 10:17:06.347641173 +0000 UTC m=+0.057088604 container kill c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:17:06 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/host
Dec 06 10:17:06 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/opts
Dec 06 10:17:06 np0005548790.localdomain systemd[1]: tmp-crun.DN2K6I.mount: Deactivated successfully.
Dec 06 10:17:06 np0005548790.localdomain sshd[312492]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:17:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:06.612 262327 INFO neutron.agent.dhcp.agent [None req-b42eeb39-4ac1-4fef-9cc2-6327e27a7ae0 - - - - - -] DHCP configuration for ports {'08c3c00a-de2d-4ed0-a507-32776d951596'} is completed
Dec 06 10:17:06 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:06.957 2 INFO neutron.agent.securitygroups_rpc [None req-0061dfd5-bb12-495d-9673-65d6ef0bbdf6 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:17:06 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:06.962 2 INFO neutron.agent.securitygroups_rpc [None req-113b5acf-416d-4ae9-b224-76f1b565f762 7dcd2b11aeb4499894c7ac7c29cb6997 d6a02136413f4ad3ac51d2c4ffdad3d4 - - default default] Security group member updated ['58296f43-3702-412f-8387-07510507ed41']
Dec 06 10:17:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e126 e126: 6 total, 6 up, 6 in
Dec 06 10:17:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 7.5 KiB/s wr, 88 op/s
Dec 06 10:17:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:07.131 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:07 np0005548790.localdomain ceph-mon[301742]: osdmap e125: 6 total, 6 up, 6 in
Dec 06 10:17:07 np0005548790.localdomain ceph-mon[301742]: osdmap e126: 6 total, 6 up, 6 in
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.325 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:17:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:17:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:07.529 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:08 np0005548790.localdomain sshd[312492]: Invalid user ubuntu from 43.163.93.82 port 56108
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:17:08 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:08.213 2 INFO neutron.agent.securitygroups_rpc [None req-7b711491-888b-4783-949a-3ad1e34a6987 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:17:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e127 e127: 6 total, 6 up, 6 in
Dec 06 10:17:08 np0005548790.localdomain ceph-mon[301742]: pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 765 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 7.5 KiB/s wr, 88 op/s
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: tmp-crun.hvved5.mount: Deactivated successfully.
Dec 06 10:17:08 np0005548790.localdomain podman[312498]: 2025-12-06 10:17:08.305697513 +0000 UTC m=+0.109027110 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: tmp-crun.E2mOVK.mount: Deactivated successfully.
Dec 06 10:17:08 np0005548790.localdomain podman[312500]: 2025-12-06 10:17:08.319971139 +0000 UTC m=+0.114453837 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, 
io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 10:17:08 np0005548790.localdomain podman[312498]: 2025-12-06 10:17:08.349261651 +0000 UTC m=+0.152591208 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:17:08 np0005548790.localdomain podman[312499]: 2025-12-06 10:17:08.369870759 +0000 UTC m=+0.169670581 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:17:08 np0005548790.localdomain podman[312499]: 2025-12-06 10:17:08.384050472 +0000 UTC m=+0.183850224 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:17:08 np0005548790.localdomain sshd[312492]: Received disconnect from 43.163.93.82 port 56108:11:  [preauth]
Dec 06 10:17:08 np0005548790.localdomain sshd[312492]: Disconnected from invalid user ubuntu 43.163.93.82 port 56108 [preauth]
Dec 06 10:17:08 np0005548790.localdomain podman[312500]: 2025-12-06 10:17:08.438813813 +0000 UTC m=+0.233296471 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:17:08 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:17:08 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:08.623 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:17:05Z, description=, device_id=0df74357-660e-4fa2-9159-46e39f559540, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c858033d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85803ca0>], id=08c3c00a-de2d-4ed0-a507-32776d951596, ip_allocation=immediate, mac_address=fa:16:3e:b5:47:36, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:16:52Z, description=, dns_domain=, id=2f7ffd81-9f27-41ba-85a3-f6c464090ebd, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesTestJSON-1133471416-network, port_security_enabled=True, project_id=37dcf5204733427ebb8bdbe574dca584, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38217, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1218, status=ACTIVE, subnets=['4a91a984-aa5b-414c-9ac0-4ed9dc9f03d4'], tags=[], tenant_id=37dcf5204733427ebb8bdbe574dca584, updated_at=2025-12-06T10:16:56Z, vlan_transparent=None, network_id=2f7ffd81-9f27-41ba-85a3-f6c464090ebd, port_security_enabled=False, project_id=37dcf5204733427ebb8bdbe574dca584, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1263, status=DOWN, tags=[], tenant_id=37dcf5204733427ebb8bdbe574dca584, updated_at=2025-12-06T10:17:05Z on network 2f7ffd81-9f27-41ba-85a3-f6c464090ebd
Dec 06 10:17:08 np0005548790.localdomain dnsmasq[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/addn_hosts - 1 addresses
Dec 06 10:17:08 np0005548790.localdomain podman[312579]: 2025-12-06 10:17:08.799432937 +0000 UTC m=+0.033926499 container kill c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:17:08 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/host
Dec 06 10:17:08 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/opts
Dec 06 10:17:08 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:08.858 2 INFO neutron.agent.securitygroups_rpc [None req-5531eff7-1536-47cb-87b6-fc07778b8cfc 183487bfea4148c8bd274489b01ac583 290c121e7a5344fea2a32f4e64e74fb4 - - default default] Security group member updated ['7248d87f-aba7-4d7e-b680-1fdbc4f1cdd3']
Dec 06 10:17:08 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:08.980 262327 INFO neutron.agent.dhcp.agent [None req-cc1a1c47-9cef-4bf3-9bf5-f727a31fcc3a - - - - - -] DHCP configuration for ports {'08c3c00a-de2d-4ed0-a507-32776d951596'} is completed
Dec 06 10:17:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 188 KiB/s rd, 16 KiB/s wr, 256 op/s
Dec 06 10:17:09 np0005548790.localdomain ceph-mon[301742]: osdmap e127: 6 total, 6 up, 6 in
Dec 06 10:17:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e128 e128: 6 total, 6 up, 6 in
Dec 06 10:17:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e129 e129: 6 total, 6 up, 6 in
Dec 06 10:17:10 np0005548790.localdomain ceph-mon[301742]: pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 188 KiB/s rd, 16 KiB/s wr, 256 op/s
Dec 06 10:17:10 np0005548790.localdomain ceph-mon[301742]: osdmap e128: 6 total, 6 up, 6 in
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 234 KiB/s rd, 20 KiB/s wr, 319 op/s
Dec 06 10:17:11 np0005548790.localdomain ceph-mon[301742]: osdmap e129: 6 total, 6 up, 6 in
Dec 06 10:17:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:11.615 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:17:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:11.616 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:17:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:11.651 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e130 e130: 6 total, 6 up, 6 in
Dec 06 10:17:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:12.135 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:12 np0005548790.localdomain ceph-mon[301742]: pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 234 KiB/s rd, 20 KiB/s wr, 319 op/s
Dec 06 10:17:12 np0005548790.localdomain ceph-mon[301742]: osdmap e130: 6 total, 6 up, 6 in
Dec 06 10:17:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:12.531 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:12.611 2 INFO neutron.agent.securitygroups_rpc [req-0f5bcb52-14d6-4090-84d5-2a6fc264a912 req-f6b94b35-a5a9-45fc-80c3-03af12f9ebaa b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['1e2df8fe-9d93-4483-a509-0caee18c220e']
Dec 06 10:17:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:12.965 2 INFO neutron.agent.securitygroups_rpc [None req-5b35cf9b-2c10-4acb-804d-e7f71d7bfae3 7dcd2b11aeb4499894c7ac7c29cb6997 d6a02136413f4ad3ac51d2c4ffdad3d4 - - default default] Security group member updated ['58296f43-3702-412f-8387-07510507ed41']
Dec 06 10:17:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 137 KiB/s rd, 10 KiB/s wr, 188 op/s
Dec 06 10:17:13 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:13.898 2 INFO neutron.agent.securitygroups_rpc [req-01e69ff7-4c57-4a62-a8e2-72eac205e556 req-eb6ec33a-4a21-4246-bab7-a4fceda1903a b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['73772eb3-7feb-4994-9518-58f9e6d5a8ed']
Dec 06 10:17:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:14 np0005548790.localdomain ceph-mon[301742]: pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 137 KiB/s rd, 10 KiB/s wr, 188 op/s
Dec 06 10:17:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:17:14 np0005548790.localdomain podman[312602]: 2025-12-06 10:17:14.570269739 +0000 UTC m=+0.078171515 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd)
Dec 06 10:17:14 np0005548790.localdomain podman[312602]: 2025-12-06 10:17:14.606918391 +0000 UTC m=+0.114820157 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Dec 06 10:17:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:14.618 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:17:14 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:17:14 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:14.678 2 INFO neutron.agent.securitygroups_rpc [None req-591e6c3a-21e9-4cb6-8654-ee5dfe5ee17d f89e0038548e41fa9a8202b7a7e9ade1 49bb78ce003e4bec87707ab7af03ae7e - - default default] Security group rule updated ['7d9717d3-d014-450e-9e8d-c62143b51d32']
Dec 06 10:17:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 8.3 KiB/s wr, 153 op/s
Dec 06 10:17:15 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:15.530 2 INFO neutron.agent.securitygroups_rpc [req-802ef8ad-2f30-424a-8810-ccf196e89ec8 req-2c9ca4ac-9b05-42f0-9546-b86c6383ded6 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['80cd7ff3-0b8b-4d61-9358-b2f28d5f4668']
Dec 06 10:17:16 np0005548790.localdomain ceph-mon[301742]: pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 8.3 KiB/s wr, 153 op/s
Dec 06 10:17:16 np0005548790.localdomain podman[312639]: 2025-12-06 10:17:16.838516318 +0000 UTC m=+0.056302224 container kill 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:17:16 np0005548790.localdomain dnsmasq[312321]: exiting on receipt of SIGTERM
Dec 06 10:17:16 np0005548790.localdomain systemd[1]: libpod-079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2.scope: Deactivated successfully.
Dec 06 10:17:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:17:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:17:16 np0005548790.localdomain podman[312653]: 2025-12-06 10:17:16.911507792 +0000 UTC m=+0.060872358 container died 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:17:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2-userdata-shm.mount: Deactivated successfully.
Dec 06 10:17:16 np0005548790.localdomain podman[312653]: 2025-12-06 10:17:16.949357175 +0000 UTC m=+0.098721701 container cleanup 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:17:16 np0005548790.localdomain systemd[1]: libpod-conmon-079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2.scope: Deactivated successfully.
Dec 06 10:17:16 np0005548790.localdomain podman[312655]: 2025-12-06 10:17:16.993964802 +0000 UTC m=+0.134026687 container remove 079edef0e3f21a80b96737ad7b3f76896df0011a3c5e9cf0ffcd7f03e64741a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1399d1e5-1513-4962-bae4-5ed900ff7211, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:17:17 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:17Z|00108|binding|INFO|Releasing lport c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 from this chassis (sb_readonly=0)
Dec 06 10:17:17 np0005548790.localdomain kernel: device tapc290d0b6-ee left promiscuous mode
Dec 06 10:17:17 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:17Z|00109|binding|INFO|Setting lport c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 down in Southbound
Dec 06 10:17:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:17.042 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:17 np0005548790.localdomain podman[312661]: 2025-12-06 10:17:17.050586954 +0000 UTC m=+0.183899095 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:17:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:17.054 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-1399d1e5-1513-4962-bae4-5ed900ff7211', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1399d1e5-1513-4962-bae4-5ed900ff7211', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cee3e0c1575f4b46bd60ec5b2e858b9d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af784563-e963-4a26-a17b-1d37d6980aac, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=c290d0b6-ee62-46ad-8b4b-6ee39f9294c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:17:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:17.056 159200 INFO neutron.agent.ovn.metadata.agent [-] Port c290d0b6-ee62-46ad-8b4b-6ee39f9294c0 in datapath 1399d1e5-1513-4962-bae4-5ed900ff7211 unbound from our chassis
Dec 06 10:17:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:17.059 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1399d1e5-1513-4962-bae4-5ed900ff7211, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:17:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:17.061 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[624507f3-be81-4f13-b7ae-32099587412a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:17:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:17.064 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e131 e131: 6 total, 6 up, 6 in
Dec 06 10:17:17 np0005548790.localdomain podman[312661]: 2025-12-06 10:17:17.088394186 +0000 UTC m=+0.221706337 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:17:17 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:17:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 98 KiB/s rd, 7.3 KiB/s wr, 135 op/s
Dec 06 10:17:17 np0005548790.localdomain podman[312667]: 2025-12-06 10:17:17.107928644 +0000 UTC m=+0.237214167 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 06 10:17:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:17.138 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:17 np0005548790.localdomain podman[312667]: 2025-12-06 10:17:17.168333198 +0000 UTC m=+0.297618751 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:17:17 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:17:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:17.226 2 INFO neutron.agent.securitygroups_rpc [req-d0c022c7-5c29-48d9-b4af-ef083b33fa00 req-5f51dca4-f136-4f7f-a521-ca766171afcb b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['48d24f9a-1de0-4ca7-bff4-bdd00474b49e']
Dec 06 10:17:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:17.436 262327 INFO neutron.agent.dhcp.agent [None req-1fe421f8-25b0-4460-94ea-ebffcf3f0148 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:17.533 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:17.658 2 INFO neutron.agent.securitygroups_rpc [None req-2df808f7-3669-4bdd-a1f6-a6327b63c196 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']
Dec 06 10:17:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:17.712 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:17 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-a69795329f816d8bf8ebc2e9d9d9d492772f9918c14322138757dfdc2ef71d2a-merged.mount: Deactivated successfully.
Dec 06 10:17:17 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d1399d1e5\x2d1513\x2d4962\x2dbae4\x2d5ed900ff7211.mount: Deactivated successfully.
Dec 06 10:17:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:17.876 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:18 np0005548790.localdomain ceph-mon[301742]: osdmap e131: 6 total, 6 up, 6 in
Dec 06 10:17:18 np0005548790.localdomain ceph-mon[301742]: pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 98 KiB/s rd, 7.3 KiB/s wr, 135 op/s
Dec 06 10:17:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:18.274 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:17:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:17:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:17:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:17:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:17:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19211 "" "Go-http-client/1.1"
Dec 06 10:17:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:18.889 2 INFO neutron.agent.securitygroups_rpc [None req-a4d21316-b177-48b7-92ec-319ed42d1b0b 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']
Dec 06 10:17:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 6.2 KiB/s wr, 114 op/s
Dec 06 10:17:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:19.633 2 INFO neutron.agent.securitygroups_rpc [None req-de533a6a-08ae-42c2-b158-11c15e64ecbf 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']
Dec 06 10:17:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:19.701 2 INFO neutron.agent.securitygroups_rpc [req-5de42e7e-0662-4156-9401-22106a567059 req-ed4ee54f-d494-4326-9cfe-66d7201bb9f8 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['9b6ed766-684e-4de1-9195-49dc13639cf2']
Dec 06 10:17:20 np0005548790.localdomain ceph-mon[301742]: pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 6.2 KiB/s wr, 114 op/s
Dec 06 10:17:20 np0005548790.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 06 10:17:20 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:20.531 2 INFO neutron.agent.securitygroups_rpc [req-0deed905-7e01-4df3-9b96-c6dd2bc740af req-aa574686-cd75-40ee-9098-a2781b4cfdf3 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['9b6ed766-684e-4de1-9195-49dc13639cf2']
Dec 06 10:17:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 5.5 KiB/s wr, 101 op/s
Dec 06 10:17:21 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:21.275 2 INFO neutron.agent.securitygroups_rpc [None req-a16f2b30-088a-4292-a104-7f6939a88353 406e5cc53df749808a8770da68d7033d 64b9b91747c648148f6dd23ce81ceb80 - - default default] Security group member updated ['592d46d3-9a35-48d4-b6ca-ba1068626b4d']
Dec 06 10:17:21 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:21.332 2 INFO neutron.agent.securitygroups_rpc [req-eb1d2fcf-8073-401d-9c1d-cc925d78bfca req-148b7f3e-4a54-4ceb-8b18-a63fa0926a26 b4f9b4e4cabd4b079cb8c31c22004b7a 37dcf5204733427ebb8bdbe574dca584 - - default default] Security group rule updated ['9b6ed766-684e-4de1-9195-49dc13639cf2']
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.097447) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242097507, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2564, "num_deletes": 264, "total_data_size": 3565012, "memory_usage": 3625056, "flush_reason": "Manual Compaction"}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242116127, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2275089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18251, "largest_seqno": 20810, "table_properties": {"data_size": 2265916, "index_size": 5678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 20684, "raw_average_key_size": 21, "raw_value_size": 2246918, "raw_average_value_size": 2321, "num_data_blocks": 247, "num_entries": 968, "num_filter_entries": 968, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016080, "oldest_key_time": 1765016080, "file_creation_time": 1765016242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18725 microseconds, and 5981 cpu microseconds.
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.116172) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2275089 bytes OK
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.116195) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.118109) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.118134) EVENT_LOG_v1 {"time_micros": 1765016242118128, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.118155) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3553544, prev total WAL file size 3553544, number of live WAL files 2.
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.119103) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end)
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(2221KB)], [27(18MB)]
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242119143, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21558465, "oldest_snapshot_seqno": -1}
Dec 06 10:17:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:22.139 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12584 keys, 17569362 bytes, temperature: kUnknown
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242216303, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17569362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17497905, "index_size": 38908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 336702, "raw_average_key_size": 26, "raw_value_size": 17283957, "raw_average_value_size": 1373, "num_data_blocks": 1480, "num_entries": 12584, "num_filter_entries": 12584, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.216638) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17569362 bytes
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.218674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.7 rd, 180.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 18.4 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(17.2) write-amplify(7.7) OK, records in: 13122, records dropped: 538 output_compression: NoCompression
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.218707) EVENT_LOG_v1 {"time_micros": 1765016242218693, "job": 14, "event": "compaction_finished", "compaction_time_micros": 97252, "compaction_time_cpu_micros": 47101, "output_level": 6, "num_output_files": 1, "total_output_size": 17569362, "num_input_records": 13122, "num_output_records": 12584, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242219307, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016242221749, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.119003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.221874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.221882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.221886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.221889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:17:22.221892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:17:22 np0005548790.localdomain ceph-mon[301742]: pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 5.5 KiB/s wr, 101 op/s
Dec 06 10:17:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:22.534 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:17:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:17:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:24 np0005548790.localdomain ceph-mon[301742]: pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:24 np0005548790.localdomain dnsmasq[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/addn_hosts - 0 addresses
Dec 06 10:17:24 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/host
Dec 06 10:17:24 np0005548790.localdomain podman[312748]: 2025-12-06 10:17:24.716435138 +0000 UTC m=+0.070508618 container kill c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:17:24 np0005548790.localdomain dnsmasq-dhcp[312439]: read /var/lib/neutron/dhcp/2f7ffd81-9f27-41ba-85a3-f6c464090ebd/opts
Dec 06 10:17:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:24.885 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:24 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:24Z|00110|binding|INFO|Releasing lport 3780d604-8f3c-416e-a6a2-35d810aa2cb5 from this chassis (sb_readonly=0)
Dec 06 10:17:24 np0005548790.localdomain kernel: device tap3780d604-8f left promiscuous mode
Dec 06 10:17:24 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:24Z|00111|binding|INFO|Setting lport 3780d604-8f3c-416e-a6a2-35d810aa2cb5 down in Southbound
Dec 06 10:17:24 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:24.901 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-2f7ffd81-9f27-41ba-85a3-f6c464090ebd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f7ffd81-9f27-41ba-85a3-f6c464090ebd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '37dcf5204733427ebb8bdbe574dca584', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96cb3e55-a48b-4639-b988-01bcb84fe479, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=3780d604-8f3c-416e-a6a2-35d810aa2cb5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:17:24 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:24.904 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 3780d604-8f3c-416e-a6a2-35d810aa2cb5 in datapath 2f7ffd81-9f27-41ba-85a3-f6c464090ebd unbound from our chassis
Dec 06 10:17:24 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:24.906 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f7ffd81-9f27-41ba-85a3-f6c464090ebd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:17:24 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:24.907 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[2be405cc-7a89-410a-abaa-2029a70fc9e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:17:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:24.909 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:26 np0005548790.localdomain ceph-mon[301742]: pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:26.440 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:26 np0005548790.localdomain dnsmasq[312439]: exiting on receipt of SIGTERM
Dec 06 10:17:26 np0005548790.localdomain podman[312788]: 2025-12-06 10:17:26.909348509 +0000 UTC m=+0.060822117 container kill c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:17:26 np0005548790.localdomain systemd[1]: tmp-crun.ZF61m1.mount: Deactivated successfully.
Dec 06 10:17:26 np0005548790.localdomain systemd[1]: libpod-c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce.scope: Deactivated successfully.
Dec 06 10:17:26 np0005548790.localdomain podman[312800]: 2025-12-06 10:17:26.976087434 +0000 UTC m=+0.054580898 container died c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:17:27 np0005548790.localdomain podman[312800]: 2025-12-06 10:17:27.007521814 +0000 UTC m=+0.086015228 container cleanup c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:17:27 np0005548790.localdomain systemd[1]: libpod-conmon-c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce.scope: Deactivated successfully.
Dec 06 10:17:27 np0005548790.localdomain podman[312804]: 2025-12-06 10:17:27.061818042 +0000 UTC m=+0.129361480 container remove c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2f7ffd81-9f27-41ba-85a3-f6c464090ebd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:17:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:27.087 262327 INFO neutron.agent.dhcp.agent [None req-55426180-6ff7-40bc-8977-0c4a8e14e8ea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:27.141 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:27.229 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:27.535 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-57b0d22eb355abde3c775ccf121f353083ab46383967bd63e19ad96b9f05f499-merged.mount: Deactivated successfully.
Dec 06 10:17:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c372683d3a1f4f9c8d2e6d7c48725d4a7032b1a514acea5093d400670933a0ce-userdata-shm.mount: Deactivated successfully.
Dec 06 10:17:27 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d2f7ffd81\x2d9f27\x2d41ba\x2d85a3\x2df6c464090ebd.mount: Deactivated successfully.
Dec 06 10:17:28 np0005548790.localdomain ceph-mon[301742]: pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:29.500 2 INFO neutron.agent.securitygroups_rpc [None req-b4a3dd75-3886-433c-a68a-5b82ba491223 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:30 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:30.065 2 INFO neutron.agent.securitygroups_rpc [None req-77263169-ab43-473e-a592-07200b19e18c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:30 np0005548790.localdomain ceph-mon[301742]: pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:30 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:30.256 2 INFO neutron.agent.securitygroups_rpc [None req-b0fdf288-4ef8-4212-8aee-98bfee473c24 8eeb1ce8ea6f4981a55c23fbea57f4cb f9595f0635f14c2196533c0f5ee5dc3b - - default default] Security group member updated ['cab1d39e-aba5-4938-880e-87b80fed90d0']
Dec 06 10:17:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:32.143 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:32 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:32.192 2 INFO neutron.agent.securitygroups_rpc [None req-a2daea0b-127d-4cb1-8d58-679cf0ec3092 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:32 np0005548790.localdomain ceph-mon[301742]: pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:32.538 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:32 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:32.684 2 INFO neutron.agent.securitygroups_rpc [None req-a950d9cb-4b90-43c7-9619-4f314921acec 8eeb1ce8ea6f4981a55c23fbea57f4cb f9595f0635f14c2196533c0f5ee5dc3b - - default default] Security group member updated ['cab1d39e-aba5-4938-880e-87b80fed90d0']
Dec 06 10:17:33 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:33.080 2 INFO neutron.agent.securitygroups_rpc [None req-675c08cc-007c-4dc9-986b-f4514913c9a2 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:33 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:33.674 2 INFO neutron.agent.securitygroups_rpc [None req-2bf571b9-2f59-4b7c-8546-bb481f9be7b1 3ea76362796945abb0389f60eab07566 23fdd860878442e1b8fc77e4ae3ef271 - - default default] Security group member updated ['dd9785c1-eb5d-4293-ac78-0fc1ce108f20']
Dec 06 10:17:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:34 np0005548790.localdomain ceph-mon[301742]: pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:34 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1151500263' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:34 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:34.248 2 INFO neutron.agent.securitygroups_rpc [None req-f3a7982c-6432-4aaa-a51f-6f45752d4aa1 440e57a58b9f4b64af7435927930ce6a 37eea2b31d9543b793c928d777810de4 - - default default] Security group member updated ['5bf6ab1c-c80a-456c-9ce8-d446d055d129']
Dec 06 10:17:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:17:34 np0005548790.localdomain podman[312829]: 2025-12-06 10:17:34.565334845 +0000 UTC m=+0.079133682 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:17:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:17:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2930003635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:34 np0005548790.localdomain podman[312829]: 2025-12-06 10:17:34.60029578 +0000 UTC m=+0.114094577 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:17:34 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:17:34 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:34.798 2 INFO neutron.agent.securitygroups_rpc [None req-b1db9883-f5c1-471b-9a07-cebf6b7ffba6 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:35.198 2 INFO neutron.agent.securitygroups_rpc [None req-5b990af0-9142-4008-b949-8f1c6c9fa9d7 440e57a58b9f4b64af7435927930ce6a 37eea2b31d9543b793c928d777810de4 - - default default] Security group member updated ['5bf6ab1c-c80a-456c-9ce8-d446d055d129']
Dec 06 10:17:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:35.222 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2930003635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:35.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:35.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:17:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:35.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:17:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:35.353 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:17:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:35.857 2 INFO neutron.agent.securitygroups_rpc [None req-a5058513-5128-4405-b292-62b6045d3f2a a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e132 e132: 6 total, 6 up, 6 in
Dec 06 10:17:36 np0005548790.localdomain ceph-mon[301742]: pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:37.146 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:37 np0005548790.localdomain ceph-mon[301742]: osdmap e132: 6 total, 6 up, 6 in
Dec 06 10:17:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e133 e133: 6 total, 6 up, 6 in
Dec 06 10:17:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:37.540 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:38.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:38 np0005548790.localdomain ceph-mon[301742]: pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail
Dec 06 10:17:38 np0005548790.localdomain ceph-mon[301742]: osdmap e133: 6 total, 6 up, 6 in
Dec 06 10:17:38 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:38.466 2 INFO neutron.agent.securitygroups_rpc [None req-ab57ea17-3add-445e-9d4b-332ca72ce0af a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:17:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:17:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:17:38 np0005548790.localdomain podman[312849]: 2025-12-06 10:17:38.576449333 +0000 UTC m=+0.085871214 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:17:38 np0005548790.localdomain podman[312848]: 2025-12-06 10:17:38.617611535 +0000 UTC m=+0.131849657 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:17:38 np0005548790.localdomain podman[312849]: 2025-12-06 10:17:38.641608985 +0000 UTC m=+0.151030826 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:17:38 np0005548790.localdomain podman[312848]: 2025-12-06 10:17:38.655554422 +0000 UTC m=+0.169792484 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:17:38 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:17:38 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:17:38 np0005548790.localdomain podman[312850]: 2025-12-06 10:17:38.731358652 +0000 UTC m=+0.236646522 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec 06 10:17:38 np0005548790.localdomain podman[312850]: 2025-12-06 10:17:38.748336961 +0000 UTC m=+0.253624841 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6)
Dec 06 10:17:38 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:17:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Dec 06 10:17:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e134 e134: 6 total, 6 up, 6 in
Dec 06 10:17:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:39.331 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/80922449' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:17:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/80922449' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:17:39 np0005548790.localdomain ceph-mon[301742]: osdmap e134: 6 total, 6 up, 6 in
Dec 06 10:17:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:39.759 2 INFO neutron.agent.securitygroups_rpc [None req-24de80cf-8a07-42c3-8966-675d0403c3d2 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:40.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:40.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:40 np0005548790.localdomain ceph-mon[301742]: pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 767 B/s wr, 1 op/s
Dec 06 10:17:40 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:40.561 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:41 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:41.102 2 INFO neutron.agent.securitygroups_rpc [None req-4acfb63b-6c96-4af3-b5fa-66e73a2e25c0 cf2cadf875da4c9b86fb2902b9ee90bb 2b975a1e6b7941c09260aeb20365b968 - - default default] Security group member updated ['f9be6b32-ff8a-467f-8358-ff505a55042e']
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.336 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.337 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.367 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.367 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.368 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.368 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.368 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:17:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e135 e135: 6 total, 6 up, 6 in
Dec 06 10:17:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:17:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/619121492' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.798 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:17:41
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['backups', 'vms', 'images', 'manila_metadata', 'volumes', '.mgr', 'manila_data']
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:17:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:17:41 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:41.954 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:17:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:41.998 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.000 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11617MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.000 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.001 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001633056776940257 quantized to 32 (current 32)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16)
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:17:42 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:42.012 2 INFO neutron.agent.securitygroups_rpc [None req-f035cee5-5c71-4777-a408-c824903df12b 3ea76362796945abb0389f60eab07566 23fdd860878442e1b8fc77e4ae3ef271 - - default default] Security group member updated ['dd9785c1-eb5d-4293-ac78-0fc1ce108f20']
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:17:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.073 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.073 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.090 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e136 e136: 6 total, 6 up, 6 in
Dec 06 10:17:42 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:42.182 2 INFO neutron.agent.securitygroups_rpc [None req-2d1fe085-81b9-49e2-b303-f7feeabc4137 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.199 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 1023 B/s wr, 1 op/s
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: osdmap e135: 6 total, 6 up, 6 in
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/619121492' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: osdmap e136: 6 total, 6 up, 6 in
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.542 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:42 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:42.569 2 INFO neutron.agent.securitygroups_rpc [None req-463c5a9c-1342-4628-be66-c954070435e6 cf2cadf875da4c9b86fb2902b9ee90bb 2b975a1e6b7941c09260aeb20365b968 - - default default] Security group member updated ['f9be6b32-ff8a-467f-8358-ff505a55042e']
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:17:42 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1627444727' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.630 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.635 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.660 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.661 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:17:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:42.662 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:17:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Dec 06 10:17:43 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:43.231 2 INFO neutron.agent.securitygroups_rpc [None req-74f6711f-47e9-487d-bd32-5a2f1bba6efe a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1627444727' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1439659764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:17:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1439659764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:17:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:44 np0005548790.localdomain ceph-mon[301742]: pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Dec 06 10:17:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3506080910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4032121207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:17:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 55 op/s
Dec 06 10:17:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:17:45 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:45.569 2 INFO neutron.agent.securitygroups_rpc [None req-a9308ef0-170e-430a-9f5f-6439b979faf7 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:45 np0005548790.localdomain systemd[1]: tmp-crun.Uwj5r9.mount: Deactivated successfully.
Dec 06 10:17:45 np0005548790.localdomain podman[312954]: 2025-12-06 10:17:45.581814794 +0000 UTC m=+0.088971777 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:17:45 np0005548790.localdomain podman[312954]: 2025-12-06 10:17:45.620297945 +0000 UTC m=+0.127454908 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 10:17:45 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:17:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:45.657 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:45.658 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:17:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:45.658 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:17:46 np0005548790.localdomain ceph-mon[301742]: pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 55 op/s
Dec 06 10:17:46 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:46.575 2 INFO neutron.agent.securitygroups_rpc [None req-38541453-b414-4a96-8c97-455c5ffb96a0 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e137 e137: 6 total, 6 up, 6 in
Dec 06 10:17:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 55 op/s
Dec 06 10:17:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:47.173 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:47.201 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:17:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:17:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:47.543 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:47 np0005548790.localdomain podman[312975]: 2025-12-06 10:17:47.57190985 +0000 UTC m=+0.082911154 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:17:47 np0005548790.localdomain podman[312975]: 2025-12-06 10:17:47.584257074 +0000 UTC m=+0.095258348 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:17:47 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:17:47 np0005548790.localdomain podman[312976]: 2025-12-06 10:17:47.678962025 +0000 UTC m=+0.184647325 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:17:47 np0005548790.localdomain podman[312976]: 2025-12-06 10:17:47.719245844 +0000 UTC m=+0.224931214 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:17:47 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:17:48 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:48.021 2 INFO neutron.agent.securitygroups_rpc [None req-77939ad8-3a8c-44db-b1d8-896917e1a291 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:48 np0005548790.localdomain ceph-mon[301742]: osdmap e137: 6 total, 6 up, 6 in
Dec 06 10:17:48 np0005548790.localdomain ceph-mon[301742]: pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 55 op/s
Dec 06 10:17:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:17:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:17:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:17:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:17:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:48.398 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:17:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:17:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:17:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:17:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18732 "" "Go-http-client/1.1"
Dec 06 10:17:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 45 op/s
Dec 06 10:17:49 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:17:49.735 2 INFO neutron.agent.securitygroups_rpc [None req-028fe2d3-a2af-4154-9a69-d7d602ad3ddf a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:17:50 np0005548790.localdomain ceph-mon[301742]: pgmap v229: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 45 op/s
Dec 06 10:17:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Dec 06 10:17:52 np0005548790.localdomain ceph-mon[301742]: pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 38 op/s
Dec 06 10:17:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:52.244 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:52.545 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 409 B/s wr, 2 op/s
Dec 06 10:17:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e138 e138: 6 total, 6 up, 6 in
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:17:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:17:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:17:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:53.721 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:54.090 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:54 np0005548790.localdomain ceph-mon[301742]: pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 409 B/s wr, 2 op/s
Dec 06 10:17:54 np0005548790.localdomain ceph-mon[301742]: osdmap e138: 6 total, 6 up, 6 in
Dec 06 10:17:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:55.070 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 10:17:56 np0005548790.localdomain ceph-mon[301742]: pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 3 op/s
Dec 06 10:17:56 np0005548790.localdomain sudo[313022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:17:56 np0005548790.localdomain sudo[313022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:17:56 np0005548790.localdomain sudo[313022]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:56 np0005548790.localdomain sudo[313040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:17:56 np0005548790.localdomain sudo[313040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:17:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:56.875 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:57 np0005548790.localdomain sudo[313040]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 409 B/s wr, 2 op/s
Dec 06 10:17:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:57.249 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1671277929' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1671277929' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:17:57 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 4b807e5a-4e58-4dc8-bd42-a27b59a07bf3 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:17:57 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 4b807e5a-4e58-4dc8-bd42-a27b59a07bf3 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:17:57 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 4b807e5a-4e58-4dc8-bd42-a27b59a07bf3 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:17:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:17:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:57.546 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:57 np0005548790.localdomain sudo[313091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:17:57 np0005548790.localdomain sudo[313091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:17:57 np0005548790.localdomain sudo[313091]: pam_unix(sudo:session): session closed for user root
Dec 06 10:17:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:57.837 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:58 np0005548790.localdomain ceph-mon[301742]: pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 803 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 409 B/s wr, 2 op/s
Dec 06 10:17:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:17:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:17:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:17:58 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:17:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e139 e139: 6 total, 6 up, 6 in
Dec 06 10:17:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:17:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 2.7 KiB/s wr, 70 op/s
Dec 06 10:17:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:17:59.128 262327 INFO neutron.agent.linux.ip_lib [None req-5291c7c1-d8b7-498e-8d94-31ce8ba3d0e9 - - - - - -] Device tapf608362e-e1 cannot be used as it has no MAC address
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.152 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain kernel: device tapf608362e-e1 entered promiscuous mode
Dec 06 10:17:59 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016279.1615] manager: (tapf608362e-e1): new Generic device (/org/freedesktop/NetworkManager/Devices/27)
Dec 06 10:17:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:59Z|00112|binding|INFO|Claiming lport f608362e-e156-45cc-a4c3-b483191b6825 for this chassis.
Dec 06 10:17:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:59Z|00113|binding|INFO|f608362e-e156-45cc-a4c3-b483191b6825: Claiming unknown
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.162 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain systemd-udevd[313119]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:17:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:59.175 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-79f5539e-33bc-41c0-9c50-7ed2af3efbc6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79f5539e-33bc-41c0-9c50-7ed2af3efbc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e92b02588946eca862b8460f965b72', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41c882d1-936d-4721-b349-63efa62b10d5, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f608362e-e156-45cc-a4c3-b483191b6825) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:17:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:59.177 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f608362e-e156-45cc-a4c3-b483191b6825 in datapath 79f5539e-33bc-41c0-9c50-7ed2af3efbc6 bound to our chassis
Dec 06 10:17:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:59.180 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2b9ce8e0-b069-40da-9361-ad82bd92c244 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:17:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:59.181 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79f5539e-33bc-41c0-9c50-7ed2af3efbc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:17:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:17:59.182 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[a36abe1e-b671-401e-a137-021042d01490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.191 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.195 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:59Z|00114|binding|INFO|Setting lport f608362e-e156-45cc-a4c3-b483191b6825 ovn-installed in OVS
Dec 06 10:17:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:17:59Z|00115|binding|INFO|Setting lport f608362e-e156-45cc-a4c3-b483191b6825 up in Southbound
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.198 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf608362e-e1: No such device
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.233 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:17:59.259 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:17:59 np0005548790.localdomain ceph-mon[301742]: osdmap e139: 6 total, 6 up, 6 in
Dec 06 10:18:00 np0005548790.localdomain podman[313190]: 
Dec 06 10:18:00 np0005548790.localdomain podman[313190]: 2025-12-06 10:18:00.134907445 +0000 UTC m=+0.093901611 container create 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:18:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93.scope.
Dec 06 10:18:00 np0005548790.localdomain podman[313190]: 2025-12-06 10:18:00.094469161 +0000 UTC m=+0.053463327 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:18:00 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:18:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efab6c1a404aa4245a6c1e01080e4fb8482aac86d65a8c738ae3d2e6853147d6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:00 np0005548790.localdomain podman[313190]: 2025-12-06 10:18:00.225716341 +0000 UTC m=+0.184710477 container init 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:00 np0005548790.localdomain podman[313190]: 2025-12-06 10:18:00.235847255 +0000 UTC m=+0.194841391 container start 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:18:00 np0005548790.localdomain dnsmasq[313208]: started, version 2.85 cachesize 150
Dec 06 10:18:00 np0005548790.localdomain dnsmasq[313208]: DNS service limited to local subnets
Dec 06 10:18:00 np0005548790.localdomain dnsmasq[313208]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:18:00 np0005548790.localdomain dnsmasq[313208]: warning: no upstream servers configured
Dec 06 10:18:00 np0005548790.localdomain dnsmasq-dhcp[313208]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:18:00 np0005548790.localdomain dnsmasq[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/addn_hosts - 0 addresses
Dec 06 10:18:00 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/host
Dec 06 10:18:00 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/opts
Dec 06 10:18:00 np0005548790.localdomain ceph-mon[301742]: pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 2.7 KiB/s wr, 70 op/s
Dec 06 10:18:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1856326074' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1856326074' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:00.627 262327 INFO neutron.agent.dhcp.agent [None req-19f05bf7-5b11-45bc-8baa-76c97de2d84e - - - - - -] DHCP configuration for ports {'0a377678-cbbb-44f7-bb46-56eb6938949a'} is completed
Dec 06 10:18:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Dec 06 10:18:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:01.142 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2389509646' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2389509646' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:18:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2062782985' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:18:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2062782985' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:02 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:18:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:18:02 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:02.273 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:01Z, description=, device_id=6d77d769-2432-46ea-81cb-7c9efbed3186, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85762df0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85762850>], id=086d97ad-4b49-4628-bce2-4f403ae021a9, ip_allocation=immediate, mac_address=fa:16:3e:bc:12:ba, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:55Z, description=, dns_domain=, id=79f5539e-33bc-41c0-9c50-7ed2af3efbc6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1207249274-network, port_security_enabled=True, project_id=98e92b02588946eca862b8460f965b72, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53823, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1613, status=ACTIVE, subnets=['fbfd9fd6-a225-428f-8a68-81e4e210a2a5'], tags=[], tenant_id=98e92b02588946eca862b8460f965b72, updated_at=2025-12-06T10:17:57Z, vlan_transparent=None, network_id=79f5539e-33bc-41c0-9c50-7ed2af3efbc6, port_security_enabled=False, project_id=98e92b02588946eca862b8460f965b72, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1651, status=DOWN, tags=[], tenant_id=98e92b02588946eca862b8460f965b72, updated_at=2025-12-06T10:18:01Z on network 79f5539e-33bc-41c0-9c50-7ed2af3efbc6
Dec 06 10:18:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:02.284 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:02 np0005548790.localdomain dnsmasq[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/addn_hosts - 1 addresses
Dec 06 10:18:02 np0005548790.localdomain podman[313226]: 2025-12-06 10:18:02.484322218 +0000 UTC m=+0.061472563 container kill 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:18:02 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/host
Dec 06 10:18:02 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/opts
Dec 06 10:18:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:02.548 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:02 np0005548790.localdomain ceph-mon[301742]: pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.2 KiB/s wr, 68 op/s
Dec 06 10:18:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2062782985' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2062782985' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:18:02 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:02.825 262327 INFO neutron.agent.dhcp.agent [None req-cc101286-6899-4ec9-8092-a1967fc4d428 - - - - - -] DHCP configuration for ports {'086d97ad-4b49-4628-bce2-4f403ae021a9'} is completed
Dec 06 10:18:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 3.3 KiB/s wr, 91 op/s
Dec 06 10:18:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:04 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:04.288 2 INFO neutron.agent.securitygroups_rpc [None req-a84ff9e7-4dda-4f24-9c52-73179c1374d1 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']
Dec 06 10:18:04 np0005548790.localdomain ceph-mon[301742]: pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 3.3 KiB/s wr, 91 op/s
Dec 06 10:18:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 90 op/s
Dec 06 10:18:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:18:05 np0005548790.localdomain systemd[1]: tmp-crun.d50v93.mount: Deactivated successfully.
Dec 06 10:18:05 np0005548790.localdomain podman[313247]: 2025-12-06 10:18:05.579919214 +0000 UTC m=+0.088838105 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:18:05 np0005548790.localdomain podman[313247]: 2025-12-06 10:18:05.613279816 +0000 UTC m=+0.122198707 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 06 10:18:05 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:18:05 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:05.982 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:01Z, description=, device_id=6d77d769-2432-46ea-81cb-7c9efbed3186, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857fd070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857fda00>], id=086d97ad-4b49-4628-bce2-4f403ae021a9, ip_allocation=immediate, mac_address=fa:16:3e:bc:12:ba, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:17:55Z, description=, dns_domain=, id=79f5539e-33bc-41c0-9c50-7ed2af3efbc6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-1207249274-network, port_security_enabled=True, project_id=98e92b02588946eca862b8460f965b72, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53823, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1613, status=ACTIVE, subnets=['fbfd9fd6-a225-428f-8a68-81e4e210a2a5'], tags=[], tenant_id=98e92b02588946eca862b8460f965b72, updated_at=2025-12-06T10:17:57Z, vlan_transparent=None, network_id=79f5539e-33bc-41c0-9c50-7ed2af3efbc6, port_security_enabled=False, project_id=98e92b02588946eca862b8460f965b72, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1651, status=DOWN, tags=[], tenant_id=98e92b02588946eca862b8460f965b72, updated_at=2025-12-06T10:18:01Z on network 79f5539e-33bc-41c0-9c50-7ed2af3efbc6
Dec 06 10:18:06 np0005548790.localdomain podman[313283]: 2025-12-06 10:18:06.231813056 +0000 UTC m=+0.065122223 container kill 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 10:18:06 np0005548790.localdomain dnsmasq[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/addn_hosts - 1 addresses
Dec 06 10:18:06 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/host
Dec 06 10:18:06 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/opts
Dec 06 10:18:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:06.478 262327 INFO neutron.agent.dhcp.agent [None req-aaef4089-f64f-4979-876d-456dc83757e0 - - - - - -] DHCP configuration for ports {'086d97ad-4b49-4628-bce2-4f403ae021a9'} is completed
Dec 06 10:18:06 np0005548790.localdomain ceph-mon[301742]: pgmap v239: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 90 op/s
Dec 06 10:18:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e140 e140: 6 total, 6 up, 6 in
Dec 06 10:18:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 90 op/s
Dec 06 10:18:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:07.325 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:07.549 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:08 np0005548790.localdomain ceph-mon[301742]: pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 3.3 KiB/s wr, 90 op/s
Dec 06 10:18:08 np0005548790.localdomain ceph-mon[301742]: osdmap e140: 6 total, 6 up, 6 in
Dec 06 10:18:08 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:08.687 2 INFO neutron.agent.securitygroups_rpc [None req-2cd445e7-be6d-4272-b78a-eedc8c1ca774 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']
Dec 06 10:18:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:18:09 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:09.129 262327 INFO neutron.agent.linux.ip_lib [None req-be122f16-8931-4db8-8d35-38eff6ad8e83 - - - - - -] Device tapf29906d1-35 cannot be used as it has no MAC address
Dec 06 10:18:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.148 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain kernel: device tapf29906d1-35 entered promiscuous mode
Dec 06 10:18:09 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016289.1585] manager: (tapf29906d1-35): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Dec 06 10:18:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:09Z|00116|binding|INFO|Claiming lport f29906d1-355f-44b5-8226-92f4452379c6 for this chassis.
Dec 06 10:18:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:09Z|00117|binding|INFO|f29906d1-355f-44b5-8226-92f4452379c6: Claiming unknown
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.163 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: tmp-crun.9tqAgm.mount: Deactivated successfully.
Dec 06 10:18:09 np0005548790.localdomain podman[313310]: 2025-12-06 10:18:09.170842875 +0000 UTC m=+0.097268632 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350)
Dec 06 10:18:09 np0005548790.localdomain systemd-udevd[313350]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.185 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe66:2e3c/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a4b60f19-317d-48ed-ac1b-193a89e4381f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4b60f19-317d-48ed-ac1b-193a89e4381f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a6e70fd-d823-4cab-b7c4-cd756db3ca68, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f29906d1-355f-44b5-8226-92f4452379c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.187 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f29906d1-355f-44b5-8226-92f4452379c6 in datapath a4b60f19-317d-48ed-ac1b-193a89e4381f bound to our chassis
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.190 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7770d51c-5dc5-4c0a-b5b8-4623f0619258 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.190 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a4b60f19-317d-48ed-ac1b-193a89e4381f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.191 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[cc08a156-2e12-4bde-b0ef-2d0efca05f9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.195 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.198 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:09Z|00118|binding|INFO|Setting lport f29906d1-355f-44b5-8226-92f4452379c6 ovn-installed in OVS
Dec 06 10:18:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:09Z|00119|binding|INFO|Setting lport f29906d1-355f-44b5-8226-92f4452379c6 up in Southbound
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.200 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain podman[313308]: 2025-12-06 10:18:09.214943107 +0000 UTC m=+0.145369572 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:18:09 np0005548790.localdomain podman[313308]: 2025-12-06 10:18:09.222744099 +0000 UTC m=+0.153170574 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.228 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.230 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.231 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.235 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:09.235 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[f4765e3a-09ad-4d9e-a030-f407db925ade]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:18:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:09.260 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:09 np0005548790.localdomain podman[313310]: 2025-12-06 10:18:09.275240599 +0000 UTC m=+0.201666346 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container)
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:18:09 np0005548790.localdomain podman[313307]: 2025-12-06 10:18:09.277233683 +0000 UTC m=+0.210756642 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:18:09 np0005548790.localdomain podman[313307]: 2025-12-06 10:18:09.356769414 +0000 UTC m=+0.290292433 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:18:09 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:18:10 np0005548790.localdomain systemd[1]: tmp-crun.ytOUL8.mount: Deactivated successfully.
Dec 06 10:18:10 np0005548790.localdomain podman[313428]: 
Dec 06 10:18:10 np0005548790.localdomain podman[313428]: 2025-12-06 10:18:10.043552669 +0000 UTC m=+0.083540931 container create 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:18:10 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:10.062 2 INFO neutron.agent.securitygroups_rpc [None req-36813505-8d2e-42b4-bcdd-400a4500589a a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:10 np0005548790.localdomain systemd[1]: Started libpod-conmon-0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2.scope.
Dec 06 10:18:10 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:18:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b591cf72021080d4bba485739eac4b1c085eefb4cd4d058bc85430dad4be286e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:10 np0005548790.localdomain podman[313428]: 2025-12-06 10:18:10.005310905 +0000 UTC m=+0.045299227 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:18:10 np0005548790.localdomain podman[313428]: 2025-12-06 10:18:10.110477649 +0000 UTC m=+0.150465931 container init 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:18:10 np0005548790.localdomain podman[313428]: 2025-12-06 10:18:10.118371493 +0000 UTC m=+0.158359785 container start 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:10 np0005548790.localdomain dnsmasq[313446]: started, version 2.85 cachesize 150
Dec 06 10:18:10 np0005548790.localdomain dnsmasq[313446]: DNS service limited to local subnets
Dec 06 10:18:10 np0005548790.localdomain dnsmasq[313446]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:18:10 np0005548790.localdomain dnsmasq[313446]: warning: no upstream servers configured
Dec 06 10:18:10 np0005548790.localdomain dnsmasq-dhcp[313446]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:18:10 np0005548790.localdomain dnsmasq[313446]: read /var/lib/neutron/dhcp/a4b60f19-317d-48ed-ac1b-193a89e4381f/addn_hosts - 0 addresses
Dec 06 10:18:10 np0005548790.localdomain dnsmasq-dhcp[313446]: read /var/lib/neutron/dhcp/a4b60f19-317d-48ed-ac1b-193a89e4381f/host
Dec 06 10:18:10 np0005548790.localdomain dnsmasq-dhcp[313446]: read /var/lib/neutron/dhcp/a4b60f19-317d-48ed-ac1b-193a89e4381f/opts
Dec 06 10:18:10 np0005548790.localdomain ceph-mon[301742]: pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Dec 06 10:18:10 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:10.327 262327 INFO neutron.agent.dhcp.agent [None req-6f70133d-9f7a-484d-bf50-e24799a07dd5 - - - - - -] DHCP configuration for ports {'9eed21da-76f5-4550-90ef-0f2549a8aab7'} is completed
Dec 06 10:18:10 np0005548790.localdomain dnsmasq[313446]: exiting on receipt of SIGTERM
Dec 06 10:18:10 np0005548790.localdomain podman[313462]: 2025-12-06 10:18:10.50872653 +0000 UTC m=+0.060322772 container kill 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:10 np0005548790.localdomain systemd[1]: libpod-0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2.scope: Deactivated successfully.
Dec 06 10:18:10 np0005548790.localdomain podman[313476]: 2025-12-06 10:18:10.562489444 +0000 UTC m=+0.044799382 container died 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:18:10 np0005548790.localdomain podman[313476]: 2025-12-06 10:18:10.592576068 +0000 UTC m=+0.074885946 container cleanup 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:10 np0005548790.localdomain systemd[1]: libpod-conmon-0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2.scope: Deactivated successfully.
Dec 06 10:18:10 np0005548790.localdomain podman[313483]: 2025-12-06 10:18:10.664300608 +0000 UTC m=+0.133988695 container remove 0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a4b60f19-317d-48ed-ac1b-193a89e4381f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 06 10:18:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:10.676 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:10 np0005548790.localdomain kernel: device tapf29906d1-35 left promiscuous mode
Dec 06 10:18:10 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:10Z|00120|binding|INFO|Releasing lport f29906d1-355f-44b5-8226-92f4452379c6 from this chassis (sb_readonly=0)
Dec 06 10:18:10 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:10Z|00121|binding|INFO|Setting lport f29906d1-355f-44b5-8226-92f4452379c6 down in Southbound
Dec 06 10:18:10 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:10.687 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a4b60f19-317d-48ed-ac1b-193a89e4381f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a4b60f19-317d-48ed-ac1b-193a89e4381f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a6e70fd-d823-4cab-b7c4-cd756db3ca68, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f29906d1-355f-44b5-8226-92f4452379c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:10 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:10.689 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f29906d1-355f-44b5-8226-92f4452379c6 in datapath a4b60f19-317d-48ed-ac1b-193a89e4381f unbound from our chassis
Dec 06 10:18:10 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:10.692 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a4b60f19-317d-48ed-ac1b-193a89e4381f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:10 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:10.693 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[bd903d9f-e68d-41db-80e3-944dc274c927]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:10.701 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:10 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:10.940 2 INFO neutron.agent.securitygroups_rpc [None req-809d6155-5d31-4aee-97b1-907b0d1ee5ee a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b591cf72021080d4bba485739eac4b1c085eefb4cd4d058bc85430dad4be286e-merged.mount: Deactivated successfully.
Dec 06 10:18:11 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d30082fd4f0502ceb4df94dbe5886c16440ac3852a4ee468f6eafadc803c3f2-userdata-shm.mount: Deactivated successfully.
Dec 06 10:18:11 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:11.089 262327 INFO neutron.agent.dhcp.agent [None req-59f89888-3122-4c3c-91da-f5da2341ea02 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:11 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2da4b60f19\x2d317d\x2d48ed\x2dac1b\x2d193a89e4381f.mount: Deactivated successfully.
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Dec 06 10:18:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e141 e141: 6 total, 6 up, 6 in
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:11.929 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:11.930 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:18:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:11.931 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:18:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:11.969 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:12 np0005548790.localdomain ceph-mon[301742]: pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 36 op/s
Dec 06 10:18:12 np0005548790.localdomain ceph-mon[301742]: osdmap e141: 6 total, 6 up, 6 in
Dec 06 10:18:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e142 e142: 6 total, 6 up, 6 in
Dec 06 10:18:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:12.328 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:12.552 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 3.2 KiB/s wr, 57 op/s
Dec 06 10:18:13 np0005548790.localdomain ceph-mon[301742]: osdmap e142: 6 total, 6 up, 6 in
Dec 06 10:18:13 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2958126727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:13 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2958126727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:13.756 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:13.758 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:13.760 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:13.761 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[ae08472d-3bd0-4305-b4d5-269c2f094c59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e143 e143: 6 total, 6 up, 6 in
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 3.2 KiB/s wr, 57 op/s
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4158350361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:18:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4158350361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:14.989 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:14.990 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:14.993 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:14 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:14.994 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fe350c-6cc0-4ccd-9a3e-4b63d6e76c22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 55 op/s
Dec 06 10:18:15 np0005548790.localdomain ceph-mon[301742]: osdmap e143: 6 total, 6 up, 6 in
Dec 06 10:18:15 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4158350361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:15 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4158350361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e144 e144: 6 total, 6 up, 6 in
Dec 06 10:18:16 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:16.112 2 INFO neutron.agent.securitygroups_rpc [None req-6fa383fb-a4a1-4db9-8964-14f7246d83c2 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:16 np0005548790.localdomain ceph-mon[301742]: pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 55 op/s
Dec 06 10:18:16 np0005548790.localdomain ceph-mon[301742]: osdmap e144: 6 total, 6 up, 6 in
Dec 06 10:18:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e145 e145: 6 total, 6 up, 6 in
Dec 06 10:18:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:18:16 np0005548790.localdomain podman[313511]: 2025-12-06 10:18:16.58791414 +0000 UTC m=+0.095290588 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:18:16 np0005548790.localdomain podman[313511]: 2025-12-06 10:18:16.602161285 +0000 UTC m=+0.109537753 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 10:18:16 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:18:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 3.1 KiB/s wr, 67 op/s
Dec 06 10:18:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:17.368 2 INFO neutron.agent.securitygroups_rpc [None req-034cc1e4-4fb9-4793-8ac5-168cd3b3cb7e a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:17.367 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:17 np0005548790.localdomain ceph-mon[301742]: osdmap e145: 6 total, 6 up, 6 in
Dec 06 10:18:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:18:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3956098805' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:18:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3956098805' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:17.554 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:18 np0005548790.localdomain ceph-mon[301742]: pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 3.1 KiB/s wr, 67 op/s
Dec 06 10:18:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3956098805' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3956098805' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1565843773' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1565843773' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:18:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:18:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:18:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156741 "" "Go-http-client/1.1"
Dec 06 10:18:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:18:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19204 "" "Go-http-client/1.1"
Dec 06 10:18:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:18:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:18:18 np0005548790.localdomain podman[313530]: 2025-12-06 10:18:18.564550971 +0000 UTC m=+0.078074152 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:18:18 np0005548790.localdomain podman[313530]: 2025-12-06 10:18:18.602418425 +0000 UTC m=+0.115941566 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:18:18 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:18:18 np0005548790.localdomain podman[313531]: 2025-12-06 10:18:18.622122719 +0000 UTC m=+0.131614841 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:18:18 np0005548790.localdomain podman[313531]: 2025-12-06 10:18:18.686621223 +0000 UTC m=+0.196113315 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:18 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:18:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 127 KiB/s rd, 8.0 KiB/s wr, 172 op/s
Dec 06 10:18:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:19.717 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:19.718 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:19.721 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:19.722 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e7d796-9ee9-41b2-955c-244a09299d95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:20 np0005548790.localdomain ceph-mon[301742]: pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 127 KiB/s rd, 8.0 KiB/s wr, 172 op/s
Dec 06 10:18:20 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3253677710' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:20 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3253677710' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:21 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:21.130 262327 INFO neutron.agent.linux.ip_lib [None req-c78dbc64-5687-45da-be5f-038a4d14148b - - - - - -] Device tapf870d2a5-42 cannot be used as it has no MAC address
Dec 06 10:18:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 151 op/s
Dec 06 10:18:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:21.153 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:21 np0005548790.localdomain kernel: device tapf870d2a5-42 entered promiscuous mode
Dec 06 10:18:21 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016301.1619] manager: (tapf870d2a5-42): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Dec 06 10:18:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:21.161 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:21Z|00122|binding|INFO|Claiming lport f870d2a5-426d-4b04-ad31-189a5d6fb5a2 for this chassis.
Dec 06 10:18:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:21Z|00123|binding|INFO|f870d2a5-426d-4b04-ad31-189a5d6fb5a2: Claiming unknown
Dec 06 10:18:21 np0005548790.localdomain systemd-udevd[313586]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:18:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:21.171 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe56:9669/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-921a64ce-0b35-4e83-aaa3-fb4b03781597', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-921a64ce-0b35-4e83-aaa3-fb4b03781597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87e5edff-fb33-4146-a57f-67c35802f201, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f870d2a5-426d-4b04-ad31-189a5d6fb5a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:21.173 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f870d2a5-426d-4b04-ad31-189a5d6fb5a2 in datapath 921a64ce-0b35-4e83-aaa3-fb4b03781597 bound to our chassis
Dec 06 10:18:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:21.174 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 921a64ce-0b35-4e83-aaa3-fb4b03781597 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:18:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:21.175 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[204a3368-6840-4555-89df-3f44b8ea426d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:21Z|00124|binding|INFO|Setting lport f870d2a5-426d-4b04-ad31-189a5d6fb5a2 ovn-installed in OVS
Dec 06 10:18:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:21Z|00125|binding|INFO|Setting lport f870d2a5-426d-4b04-ad31-189a5d6fb5a2 up in Southbound
Dec 06 10:18:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:21.204 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf870d2a5-42: No such device
Dec 06 10:18:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:21.239 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:21.264 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e146 e146: 6 total, 6 up, 6 in
Dec 06 10:18:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4142291706' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4142291706' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:21 np0005548790.localdomain podman[313655]: 
Dec 06 10:18:21 np0005548790.localdomain podman[313655]: 2025-12-06 10:18:21.978892348 +0000 UTC m=+0.055457061 container create 26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-921a64ce-0b35-4e83-aaa3-fb4b03781597, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:18:22 np0005548790.localdomain systemd[1]: Started libpod-conmon-26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380.scope.
Dec 06 10:18:22 np0005548790.localdomain systemd[1]: tmp-crun.Ko3KYd.mount: Deactivated successfully.
Dec 06 10:18:22 np0005548790.localdomain podman[313655]: 2025-12-06 10:18:21.947577131 +0000 UTC m=+0.024141994 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:18:22 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:18:22 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c9ca2406473224853dc15d3fc1fd09f5149912474185036a6f6f35915aa784/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:22 np0005548790.localdomain podman[313655]: 2025-12-06 10:18:22.072303004 +0000 UTC m=+0.148867717 container init 26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-921a64ce-0b35-4e83-aaa3-fb4b03781597, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 06 10:18:22 np0005548790.localdomain podman[313655]: 2025-12-06 10:18:22.082820409 +0000 UTC m=+0.159385132 container start 26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-921a64ce-0b35-4e83-aaa3-fb4b03781597, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:18:22 np0005548790.localdomain dnsmasq[313673]: started, version 2.85 cachesize 150
Dec 06 10:18:22 np0005548790.localdomain dnsmasq[313673]: DNS service limited to local subnets
Dec 06 10:18:22 np0005548790.localdomain dnsmasq[313673]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:18:22 np0005548790.localdomain dnsmasq[313673]: warning: no upstream servers configured
Dec 06 10:18:22 np0005548790.localdomain dnsmasq-dhcp[313673]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:18:22 np0005548790.localdomain dnsmasq[313673]: read /var/lib/neutron/dhcp/921a64ce-0b35-4e83-aaa3-fb4b03781597/addn_hosts - 0 addresses
Dec 06 10:18:22 np0005548790.localdomain dnsmasq-dhcp[313673]: read /var/lib/neutron/dhcp/921a64ce-0b35-4e83-aaa3-fb4b03781597/host
Dec 06 10:18:22 np0005548790.localdomain dnsmasq-dhcp[313673]: read /var/lib/neutron/dhcp/921a64ce-0b35-4e83-aaa3-fb4b03781597/opts
Dec 06 10:18:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e147 e147: 6 total, 6 up, 6 in
Dec 06 10:18:22 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:22.285 262327 INFO neutron.agent.dhcp.agent [None req-55de3e95-a058-4074-9238-2dd9edc5c7df - - - - - -] DHCP configuration for ports {'83e6fa90-7d5f-4aa0-8fb1-be8ad350b5da'} is completed
Dec 06 10:18:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:22.370 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:22 np0005548790.localdomain ceph-mon[301742]: pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 804 MiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 7.0 KiB/s wr, 151 op/s
Dec 06 10:18:22 np0005548790.localdomain ceph-mon[301742]: osdmap e146: 6 total, 6 up, 6 in
Dec 06 10:18:22 np0005548790.localdomain ceph-mon[301742]: osdmap e147: 6 total, 6 up, 6 in
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.478 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.480 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.484 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.485 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c735d6d7-6df6-41b7-b3d9-873ee8ba23a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:22 np0005548790.localdomain podman[313691]: 2025-12-06 10:18:22.531836623 +0000 UTC m=+0.078318089 container kill 26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-921a64ce-0b35-4e83-aaa3-fb4b03781597, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:18:22 np0005548790.localdomain dnsmasq[313673]: exiting on receipt of SIGTERM
Dec 06 10:18:22 np0005548790.localdomain systemd[1]: libpod-26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380.scope: Deactivated successfully.
Dec 06 10:18:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:22.556 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:22 np0005548790.localdomain podman[313707]: 2025-12-06 10:18:22.611072746 +0000 UTC m=+0.056919020 container died 26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-921a64ce-0b35-4e83-aaa3-fb4b03781597, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:18:22 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380-userdata-shm.mount: Deactivated successfully.
Dec 06 10:18:22 np0005548790.localdomain podman[313707]: 2025-12-06 10:18:22.657484111 +0000 UTC m=+0.103330395 container remove 26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-921a64ce-0b35-4e83-aaa3-fb4b03781597, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:18:22 np0005548790.localdomain systemd[1]: libpod-conmon-26d3563528bac50b1dd52cb6ce07cd7ba3d9535c89a8afdf0e60cbfb81c43380.scope: Deactivated successfully.
Dec 06 10:18:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:22.671 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:22Z|00126|binding|INFO|Releasing lport f870d2a5-426d-4b04-ad31-189a5d6fb5a2 from this chassis (sb_readonly=0)
Dec 06 10:18:22 np0005548790.localdomain kernel: device tapf870d2a5-42 left promiscuous mode
Dec 06 10:18:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:22Z|00127|binding|INFO|Setting lport f870d2a5-426d-4b04-ad31-189a5d6fb5a2 down in Southbound
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.685 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-921a64ce-0b35-4e83-aaa3-fb4b03781597', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-921a64ce-0b35-4e83-aaa3-fb4b03781597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a2aaeadee6f14b78a73f8886be99b671', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=87e5edff-fb33-4146-a57f-67c35802f201, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f870d2a5-426d-4b04-ad31-189a5d6fb5a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.687 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f870d2a5-426d-4b04-ad31-189a5d6fb5a2 in datapath 921a64ce-0b35-4e83-aaa3-fb4b03781597 unbound from our chassis
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.689 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 921a64ce-0b35-4e83-aaa3-fb4b03781597 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:18:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:22.691 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfff205-e75e-4ee1-bee3-66317e54ac36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:22.697 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:22 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:22.890 262327 INFO neutron.agent.dhcp.agent [None req-beb20564-a83d-456a-a1ea-6e312de5a6e9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:22 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:22.891 262327 INFO neutron.agent.dhcp.agent [None req-beb20564-a83d-456a-a1ea-6e312de5a6e9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:22 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:22.929 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:23 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c7c9ca2406473224853dc15d3fc1fd09f5149912474185036a6f6f35915aa784-merged.mount: Deactivated successfully.
Dec 06 10:18:23 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d921a64ce\x2d0b35\x2d4e83\x2daaa3\x2dfb4b03781597.mount: Deactivated successfully.
Dec 06 10:18:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 204 KiB/s rd, 15 MiB/s wr, 282 op/s
Dec 06 10:18:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:23.293 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/298180858' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/298180858' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e148 e148: 6 total, 6 up, 6 in
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/298180858' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/298180858' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:18:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:18:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:23.642 2 INFO neutron.agent.securitygroups_rpc [None req-b79a01a3-8e64-4889-8420-e298cffcfc58 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:23.726 2 INFO neutron.agent.securitygroups_rpc [None req-9f63fce7-8a34-4731-bfa7-9d45ada3f54e 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']
Dec 06 10:18:23 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:23.745 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:24 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:24.381 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e149 e149: 6 total, 6 up, 6 in
Dec 06 10:18:24 np0005548790.localdomain ceph-mon[301742]: pgmap v256: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 204 KiB/s rd, 15 MiB/s wr, 282 op/s
Dec 06 10:18:24 np0005548790.localdomain ceph-mon[301742]: osdmap e148: 6 total, 6 up, 6 in
Dec 06 10:18:24 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:24.636 2 INFO neutron.agent.securitygroups_rpc [None req-366a0057-fc3f-46e6-9a84-ba466e35126f a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:24 np0005548790.localdomain sshd[313734]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:18:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 155 KiB/s rd, 26 MiB/s wr, 220 op/s
Dec 06 10:18:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:25.374 2 INFO neutron.agent.securitygroups_rpc [None req-9f7062a2-5eeb-4deb-87a1-858e2e900cdd 4c6008178bdc445aa99fb1b726f87b45 a2aaeadee6f14b78a73f8886be99b671 - - default default] Security group member updated ['13551db8-e8e0-43b0-89a9-b0d8423e74c9']
Dec 06 10:18:25 np0005548790.localdomain ceph-mon[301742]: osdmap e149: 6 total, 6 up, 6 in
Dec 06 10:18:25 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1581079548' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:25 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1581079548' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:25 np0005548790.localdomain sshd[313734]: Received disconnect from 193.46.255.33 port 53069:11:  [preauth]
Dec 06 10:18:25 np0005548790.localdomain sshd[313734]: Disconnected from authenticating user root 193.46.255.33 port 53069 [preauth]
Dec 06 10:18:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e150 e150: 6 total, 6 up, 6 in
Dec 06 10:18:26 np0005548790.localdomain ceph-mon[301742]: pgmap v259: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 155 KiB/s rd, 26 MiB/s wr, 220 op/s
Dec 06 10:18:26 np0005548790.localdomain ceph-mon[301742]: osdmap e150: 6 total, 6 up, 6 in
Dec 06 10:18:26 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e151 e151: 6 total, 6 up, 6 in
Dec 06 10:18:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:27.107 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:18:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:27.168 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:27.170 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:27.172 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:27 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:27.173 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3a528801-768d-455b-bee6-3b3eaae5c784]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:27.372 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:27.558 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e152 e152: 6 total, 6 up, 6 in
Dec 06 10:18:27 np0005548790.localdomain ceph-mon[301742]: osdmap e151: 6 total, 6 up, 6 in
Dec 06 10:18:28 np0005548790.localdomain ceph-mon[301742]: pgmap v262: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:18:28 np0005548790.localdomain ceph-mon[301742]: osdmap e152: 6 total, 6 up, 6 in
Dec 06 10:18:28 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:28.657 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:28 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:28.659 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:28 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:28.661 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:28 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:28.662 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa363df-0c90-4466-8351-96ed46f49171]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 161 KiB/s rd, 1.7 MiB/s wr, 226 op/s
Dec 06 10:18:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3964149539' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3964149539' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:29 np0005548790.localdomain dnsmasq[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/addn_hosts - 0 addresses
Dec 06 10:18:29 np0005548790.localdomain podman[313753]: 2025-12-06 10:18:29.659626987 +0000 UTC m=+0.061023041 container kill 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:18:29 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/host
Dec 06 10:18:29 np0005548790.localdomain dnsmasq-dhcp[313208]: read /var/lib/neutron/dhcp/79f5539e-33bc-41c0-9c50-7ed2af3efbc6/opts
Dec 06 10:18:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:29Z|00128|binding|INFO|Releasing lport f608362e-e156-45cc-a4c3-b483191b6825 from this chassis (sb_readonly=0)
Dec 06 10:18:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:29.818 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:29 np0005548790.localdomain kernel: device tapf608362e-e1 left promiscuous mode
Dec 06 10:18:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:29Z|00129|binding|INFO|Setting lport f608362e-e156-45cc-a4c3-b483191b6825 down in Southbound
Dec 06 10:18:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:29.831 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-79f5539e-33bc-41c0-9c50-7ed2af3efbc6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79f5539e-33bc-41c0-9c50-7ed2af3efbc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e92b02588946eca862b8460f965b72', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=41c882d1-936d-4721-b349-63efa62b10d5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f608362e-e156-45cc-a4c3-b483191b6825) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:29.833 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f608362e-e156-45cc-a4c3-b483191b6825 in datapath 79f5539e-33bc-41c0-9c50-7ed2af3efbc6 unbound from our chassis
Dec 06 10:18:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:29.835 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79f5539e-33bc-41c0-9c50-7ed2af3efbc6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:29.836 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8693df23-bdbc-4164-a1b2-767482fa6c3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:29.838 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:29.906 2 INFO neutron.agent.securitygroups_rpc [None req-cc7e06ae-2215-4c85-8ca6-e56c13503fc8 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:30 np0005548790.localdomain ceph-mon[301742]: pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 161 KiB/s rd, 1.7 MiB/s wr, 226 op/s
Dec 06 10:18:30 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:30.759 2 INFO neutron.agent.securitygroups_rpc [None req-e8307117-28c2-4262-9c6e-dc24bf4a796c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 1.3 MiB/s wr, 174 op/s
Dec 06 10:18:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:31.219 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:31 np0005548790.localdomain systemd[1]: tmp-crun.DUGEVu.mount: Deactivated successfully.
Dec 06 10:18:31 np0005548790.localdomain dnsmasq[313208]: exiting on receipt of SIGTERM
Dec 06 10:18:31 np0005548790.localdomain podman[313794]: 2025-12-06 10:18:31.729230651 +0000 UTC m=+0.066239872 container kill 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:18:31 np0005548790.localdomain systemd[1]: libpod-6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93.scope: Deactivated successfully.
Dec 06 10:18:31 np0005548790.localdomain podman[313806]: 2025-12-06 10:18:31.795287548 +0000 UTC m=+0.053038375 container died 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:18:31 np0005548790.localdomain podman[313806]: 2025-12-06 10:18:31.825447314 +0000 UTC m=+0.083198091 container cleanup 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:18:31 np0005548790.localdomain systemd[1]: libpod-conmon-6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93.scope: Deactivated successfully.
Dec 06 10:18:31 np0005548790.localdomain podman[313808]: 2025-12-06 10:18:31.881318235 +0000 UTC m=+0.130893041 container remove 6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-79f5539e-33bc-41c0-9c50-7ed2af3efbc6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:18:31 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:31.905 262327 INFO neutron.agent.dhcp.agent [None req-4ec94a8e-a33f-4c8e-9083-bfe4803307af - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:31 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:31.906 262327 INFO neutron.agent.dhcp.agent [None req-4ec94a8e-a33f-4c8e-9083-bfe4803307af - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:32 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e153 e153: 6 total, 6 up, 6 in
Dec 06 10:18:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:32.398 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:32.559 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:32 np0005548790.localdomain ceph-mon[301742]: pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 1.3 MiB/s wr, 174 op/s
Dec 06 10:18:32 np0005548790.localdomain ceph-mon[301742]: osdmap e153: 6 total, 6 up, 6 in
Dec 06 10:18:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-efab6c1a404aa4245a6c1e01080e4fb8482aac86d65a8c738ae3d2e6853147d6-merged.mount: Deactivated successfully.
Dec 06 10:18:32 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f8c7034cbf9b8bb799e52f2354ce967f7719b2297e7f6369353c7ef19fcfb93-userdata-shm.mount: Deactivated successfully.
Dec 06 10:18:32 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d79f5539e\x2d33bc\x2d41c0\x2d9c50\x2d7ed2af3efbc6.mount: Deactivated successfully.
Dec 06 10:18:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 133 KiB/s rd, 1.2 MiB/s wr, 184 op/s
Dec 06 10:18:33 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:33.527 2 INFO neutron.agent.securitygroups_rpc [None req-dd900d05-ceed-4a76-8792-94f73f7d9bdc b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:33.809 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 10.100.0.2 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:33.811 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:18:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:33.813 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:33.813 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3120d6-0464-461f-8586-79c98e6d9128]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:34 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:34.363 2 INFO neutron.agent.securitygroups_rpc [None req-155fc4a8-22cb-4d06-82dd-8cfd5b79a9e9 8705da02a69e4c3281916dd7bc9ac6d1 851f2bb5c4164322946aa41fe266eb66 - - default default] Security group member updated ['6607cea2-9b0f-45af-9864-1af2923eb94b']
Dec 06 10:18:34 np0005548790.localdomain ceph-mon[301742]: pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 133 KiB/s rd, 1.2 MiB/s wr, 184 op/s
Dec 06 10:18:34 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2082246135' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:34 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:34.832 2 INFO neutron.agent.securitygroups_rpc [None req-d5c62043-5321-4cf5-baec-1c1605bc1cd9 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:35.046 2 INFO neutron.agent.securitygroups_rpc [None req-f9e16e57-d76f-4e49-8c54-adacc8516f8a b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 1.0 MiB/s wr, 151 op/s
Dec 06 10:18:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:35.145 2 INFO neutron.agent.securitygroups_rpc [None req-fe503d20-8e49-4871-94e0-efabb011ed42 8705da02a69e4c3281916dd7bc9ac6d1 851f2bb5c4164322946aa41fe266eb66 - - default default] Security group member updated ['6607cea2-9b0f-45af-9864-1af2923eb94b']
Dec 06 10:18:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:35.424 2 INFO neutron.agent.securitygroups_rpc [None req-f9e16e57-d76f-4e49-8c54-adacc8516f8a b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:35.687 2 INFO neutron.agent.securitygroups_rpc [None req-4e477950-abaa-4886-9df8-9dd5bb5175a4 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1644367828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:36 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:36.117 2 INFO neutron.agent.securitygroups_rpc [None req-14f85306-cb54-46c0-a6f6-e09d3e175b2a b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:36 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:36.150 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:18:36 np0005548790.localdomain podman[313834]: 2025-12-06 10:18:36.313393917 +0000 UTC m=+0.083661914 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:18:36 np0005548790.localdomain podman[313834]: 2025-12-06 10:18:36.323155101 +0000 UTC m=+0.093423118 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:18:36 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:18:36 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:36.651 2 INFO neutron.agent.securitygroups_rpc [None req-af2f65bf-97b7-4bb0-b9fe-3c28224c3c96 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:36 np0005548790.localdomain ceph-mon[301742]: pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 1.0 MiB/s wr, 151 op/s
Dec 06 10:18:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 863 KiB/s wr, 126 op/s
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.330 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.352 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.352 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.352 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:18:37 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:37.370 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.372 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.373 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.373 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.440 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:37.562 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:38 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:38.027 2 INFO neutron.agent.securitygroups_rpc [None req-1f216153-8df9-4f5f-9520-bf151df27051 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e154 e154: 6 total, 6 up, 6 in
Dec 06 10:18:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:38.345 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:38 np0005548790.localdomain ceph-mon[301742]: pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 863 KiB/s wr, 126 op/s
Dec 06 10:18:38 np0005548790.localdomain ceph-mon[301742]: osdmap e154: 6 total, 6 up, 6 in
Dec 06 10:18:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:39.030 2 INFO neutron.agent.securitygroups_rpc [None req-df5d0c63-3dbc-41ec-8a7e-d627e1beca42 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 895 B/s wr, 25 op/s
Dec 06 10:18:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:39.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:18:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:18:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:18:39 np0005548790.localdomain podman[313853]: 2025-12-06 10:18:39.573553522 +0000 UTC m=+0.085380280 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:18:39 np0005548790.localdomain podman[313853]: 2025-12-06 10:18:39.585077495 +0000 UTC m=+0.096904253 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec 06 10:18:39 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:18:39 np0005548790.localdomain podman[313852]: 2025-12-06 10:18:39.624901611 +0000 UTC m=+0.139426022 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:18:39 np0005548790.localdomain podman[313852]: 2025-12-06 10:18:39.636282649 +0000 UTC m=+0.150807100 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:18:39 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:18:39 np0005548790.localdomain podman[313854]: 2025-12-06 10:18:39.68253317 +0000 UTC m=+0.189323931 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public)
Dec 06 10:18:39 np0005548790.localdomain podman[313854]: 2025-12-06 10:18:39.697239988 +0000 UTC m=+0.204030759 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7)
Dec 06 10:18:39 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:18:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1177396961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1177396961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:39.882 2 INFO neutron.agent.securitygroups_rpc [None req-57bddb58-8e7b-4200-a14c-4d9431ae075f b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:40 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:40.271 2 INFO neutron.agent.securitygroups_rpc [None req-dae126c9-280d-4a4d-ad9b-17df376d8729 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:40.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:40 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:40.353 2 INFO neutron.agent.securitygroups_rpc [None req-e4bf1752-f0a3-4484-84a7-e670337a989c b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:40 np0005548790.localdomain ceph-mon[301742]: pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 895 B/s wr, 25 op/s
Dec 06 10:18:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e155 e155: 6 total, 6 up, 6 in
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 5 op/s
Dec 06 10:18:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:41.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:41.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:41 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:41.534 2 INFO neutron.agent.securitygroups_rpc [None req-652b9b12-ec43-4a29-b268-053f1f58f2a3 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:41 np0005548790.localdomain ceph-mon[301742]: osdmap e155: 6 total, 6 up, 6 in
Dec 06 10:18:41 np0005548790.localdomain ceph-mon[301742]: pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 840 MiB used, 41 GiB / 42 GiB avail; 3.6 KiB/s rd, 511 B/s wr, 5 op/s
Dec 06 10:18:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e156 e156: 6 total, 6 up, 6 in
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:18:41
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'images', 'vms', 'backups', 'manila_data', '.mgr']
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:41.910 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:18:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001633056776940257 quantized to 32 (current 32)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.7263051367950866e-06 of space, bias 4.0, pg target 0.002170138888888889 quantized to 16 (current 16)
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:18:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:18:42 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:42.321 2 INFO neutron.agent.securitygroups_rpc [None req-b75aa7a3-5af1-4cd3-b1b1-cfd423d7e2ab 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:42.445 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:42.563 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:42 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:42.587 2 INFO neutron.agent.securitygroups_rpc [None req-f4287026-ab49-4456-bd2b-fbcf52c0630e a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:42 np0005548790.localdomain ceph-mon[301742]: osdmap e156: 6 total, 6 up, 6 in
Dec 06 10:18:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1970788286' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1970788286' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:43.024 2 INFO neutron.agent.securitygroups_rpc [None req-55643603-9572-45b2-ae71-f445e3294506 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 5.2 KiB/s wr, 111 op/s
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4278276739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4278276739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.357 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.359 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.359 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 5.2 KiB/s wr, 111 op/s
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4278276739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4278276739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/409284351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:18:43 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2594254949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:43.854 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:18:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:44 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:44.058 2 INFO neutron.agent.securitygroups_rpc [None req-255af1be-4d8f-48ce-b409-88074fe3f28a a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.073 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.075 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11607MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.077 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.078 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.413 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.413 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:18:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:44.683 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:18:44 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:44.760 2 INFO neutron.agent.securitygroups_rpc [None req-f9323177-d4e4-4dce-bd8f-2cc985b7b1dc 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2594254949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3870109751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 3.9 KiB/s wr, 89 op/s
Dec 06 10:18:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:18:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2023953488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.183 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.191 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.217 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.219 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.220 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.221 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.221 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.247 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:18:45 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:45.343 2 INFO neutron.agent.securitygroups_rpc [None req-ec83bf10-c909-4c2a-a1a4-0827521eece8 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:45 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:45.555 262327 INFO neutron.agent.linux.ip_lib [None req-7a612d14-8a21-42da-8cf6-fcc952d0fd2a - - - - - -] Device tapbfd7408a-e4 cannot be used as it has no MAC address
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.575 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:45 np0005548790.localdomain kernel: device tapbfd7408a-e4 entered promiscuous mode
Dec 06 10:18:45 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016325.5826] manager: (tapbfd7408a-e4): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Dec 06 10:18:45 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:45Z|00130|binding|INFO|Claiming lport bfd7408a-e48a-4b6a-a11f-c1722b38f74d for this chassis.
Dec 06 10:18:45 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:45Z|00131|binding|INFO|bfd7408a-e48a-4b6a-a11f-c1722b38f74d: Claiming unknown
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.585 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:45 np0005548790.localdomain systemd-udevd[313968]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:18:45 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:45.597 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-66856c74-d8dc-4fd2-9226-e45b15148141', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66856c74-d8dc-4fd2-9226-e45b15148141', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bfe15ac-c004-483f-814d-df89bb910212, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=bfd7408a-e48a-4b6a-a11f-c1722b38f74d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:45 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:45.599 159200 INFO neutron.agent.ovn.metadata.agent [-] Port bfd7408a-e48a-4b6a-a11f-c1722b38f74d in datapath 66856c74-d8dc-4fd2-9226-e45b15148141 bound to our chassis
Dec 06 10:18:45 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:45.600 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 66856c74-d8dc-4fd2-9226-e45b15148141 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:18:45 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:45.601 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[75276dfb-b7bf-4a4c-b058-44c084e18d71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:45Z|00132|binding|INFO|Setting lport bfd7408a-e48a-4b6a-a11f-c1722b38f74d ovn-installed in OVS
Dec 06 10:18:45 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:45Z|00133|binding|INFO|Setting lport bfd7408a-e48a-4b6a-a11f-c1722b38f74d up in Southbound
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.614 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.616 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbfd7408a-e4: No such device
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.655 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:45.686 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:45 np0005548790.localdomain ceph-mon[301742]: pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 3.9 KiB/s wr, 89 op/s
Dec 06 10:18:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2023953488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:18:46 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:46.530 2 INFO neutron.agent.securitygroups_rpc [None req-28a20db1-a3bb-47de-ad39-5c494614c36d 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:46 np0005548790.localdomain podman[314039]: 
Dec 06 10:18:46 np0005548790.localdomain podman[314039]: 2025-12-06 10:18:46.574553145 +0000 UTC m=+0.088599218 container create f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:18:46 np0005548790.localdomain systemd[1]: Started libpod-conmon-f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311.scope.
Dec 06 10:18:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:18:46 np0005548790.localdomain podman[314039]: 2025-12-06 10:18:46.529896397 +0000 UTC m=+0.043942500 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:18:46 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:18:46 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9944b1b3e83921e99121923e7a37a7abb6a9fbf297e8d92a8cdeebe476a3bee4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:18:46 np0005548790.localdomain podman[314039]: 2025-12-06 10:18:46.661938848 +0000 UTC m=+0.175984921 container init f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:18:46 np0005548790.localdomain podman[314039]: 2025-12-06 10:18:46.672527965 +0000 UTC m=+0.186574038 container start f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:18:46 np0005548790.localdomain dnsmasq[314067]: started, version 2.85 cachesize 150
Dec 06 10:18:46 np0005548790.localdomain dnsmasq[314067]: DNS service limited to local subnets
Dec 06 10:18:46 np0005548790.localdomain dnsmasq[314067]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:18:46 np0005548790.localdomain dnsmasq[314067]: warning: no upstream servers configured
Dec 06 10:18:46 np0005548790.localdomain dnsmasq-dhcp[314067]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:18:46 np0005548790.localdomain dnsmasq[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/addn_hosts - 0 addresses
Dec 06 10:18:46 np0005548790.localdomain dnsmasq-dhcp[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/host
Dec 06 10:18:46 np0005548790.localdomain dnsmasq-dhcp[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/opts
Dec 06 10:18:46 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:46.724 262327 INFO neutron.agent.dhcp.agent [None req-8453a5f4-77fb-43f2-a0d9-cab2fd6547da - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:18:46Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c859b3940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c859075b0>], id=7e1b63f5-f8f5-4e98-94f6-4f3968921e98, ip_allocation=immediate, mac_address=fa:16:3e:60:56:e2, name=tempest-PortsTestJSON-430353884, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:18:42Z, description=, dns_domain=, id=66856c74-d8dc-4fd2-9226-e45b15148141, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1674786788, port_security_enabled=True, project_id=5f8e1c4c589749b99178bbc7c2bea3f0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44607, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1892, status=ACTIVE, subnets=['e0079af5-8cf6-4f97-bb10-f566aff97ac6'], tags=[], tenant_id=5f8e1c4c589749b99178bbc7c2bea3f0, updated_at=2025-12-06T10:18:44Z, vlan_transparent=None, network_id=66856c74-d8dc-4fd2-9226-e45b15148141, port_security_enabled=True, project_id=5f8e1c4c589749b99178bbc7c2bea3f0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1920, status=DOWN, tags=[], tenant_id=5f8e1c4c589749b99178bbc7c2bea3f0, updated_at=2025-12-06T10:18:46Z on network 66856c74-d8dc-4fd2-9226-e45b15148141
Dec 06 10:18:46 np0005548790.localdomain podman[314055]: 2025-12-06 10:18:46.744002187 +0000 UTC m=+0.110443188 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 10:18:46 np0005548790.localdomain podman[314055]: 2025-12-06 10:18:46.780403132 +0000 UTC m=+0.146844133 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:46 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:18:46 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:46.819 262327 INFO neutron.agent.dhcp.agent [None req-fdc268b3-8084-4b03-aaab-49b141791fbe - - - - - -] DHCP configuration for ports {'2349e3fc-bcff-4604-a13c-45a2f7077631'} is completed
Dec 06 10:18:46 np0005548790.localdomain dnsmasq[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/addn_hosts - 1 addresses
Dec 06 10:18:46 np0005548790.localdomain dnsmasq-dhcp[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/host
Dec 06 10:18:46 np0005548790.localdomain dnsmasq-dhcp[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/opts
Dec 06 10:18:46 np0005548790.localdomain podman[314094]: 2025-12-06 10:18:46.971817838 +0000 UTC m=+0.060996289 container kill f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:18:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.4 KiB/s wr, 77 op/s
Dec 06 10:18:47 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:47.170 262327 INFO neutron.agent.dhcp.agent [None req-d44f66cf-5782-4318-ae99-5ed7c1281f05 - - - - - -] DHCP configuration for ports {'7e1b63f5-f8f5-4e98-94f6-4f3968921e98'} is completed
Dec 06 10:18:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e157 e157: 6 total, 6 up, 6 in
Dec 06 10:18:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:47.248 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:47.248 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:47.249 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:18:47 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:47.313 2 INFO neutron.agent.securitygroups_rpc [None req-e6ce568b-b382-4ede-9132-8960f2608a77 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:47.474 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:47 np0005548790.localdomain dnsmasq[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/addn_hosts - 0 addresses
Dec 06 10:18:47 np0005548790.localdomain podman[314133]: 2025-12-06 10:18:47.502963984 +0000 UTC m=+0.061851313 container kill f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:18:47 np0005548790.localdomain dnsmasq-dhcp[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/host
Dec 06 10:18:47 np0005548790.localdomain dnsmasq-dhcp[314067]: read /var/lib/neutron/dhcp/66856c74-d8dc-4fd2-9226-e45b15148141/opts
Dec 06 10:18:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:47.564 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:47 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:47.683 2 INFO neutron.agent.securitygroups_rpc [None req-7750bef0-0e9a-45d8-b031-72812daa7ba7 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:48 np0005548790.localdomain ceph-mon[301742]: pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.4 KiB/s wr, 77 op/s
Dec 06 10:18:48 np0005548790.localdomain ceph-mon[301742]: osdmap e157: 6 total, 6 up, 6 in
Dec 06 10:18:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:18:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:18:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:48.400 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:18:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:48.400 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:18:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:48.401 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:18:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:18:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:18:48 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:48.428 2 INFO neutron.agent.securitygroups_rpc [None req-ca0f0e3b-d5e2-4383-9ed8-27fd62359bae 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:18:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19206 "" "Go-http-client/1.1"
Dec 06 10:18:48 np0005548790.localdomain podman[314169]: 2025-12-06 10:18:48.803823078 +0000 UTC m=+0.064116365 container kill f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:18:48 np0005548790.localdomain dnsmasq[314067]: exiting on receipt of SIGTERM
Dec 06 10:18:48 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:48.812 2 INFO neutron.agent.securitygroups_rpc [None req-18dbc826-4a96-46e3-9a3f-9599a1372c95 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:18:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:18:48 np0005548790.localdomain systemd[1]: libpod-f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311.scope: Deactivated successfully.
Dec 06 10:18:48 np0005548790.localdomain podman[314182]: 2025-12-06 10:18:48.884414128 +0000 UTC m=+0.064386062 container died f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, vol_name:cephfs) < ""
Dec 06 10:18:48 np0005548790.localdomain podman[314191]: 2025-12-06 10:18:48.942471388 +0000 UTC m=+0.103224533 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:48.948+0000 7f06345ec640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:48.948+0000 7f06345ec640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:48.948+0000 7f06345ec640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:48.948+0000 7f06345ec640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:48.948+0000 7f06345ec640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:48 np0005548790.localdomain podman[314182]: 2025-12-06 10:18:48.969623093 +0000 UTC m=+0.149595047 container cleanup f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:18:48 np0005548790.localdomain systemd[1]: libpod-conmon-f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311.scope: Deactivated successfully.
Dec 06 10:18:48 np0005548790.localdomain podman[314190]: 2025-12-06 10:18:48.977735282 +0000 UTC m=+0.144005136 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:18:49 np0005548790.localdomain podman[314190]: 2025-12-06 10:18:49.014447796 +0000 UTC m=+0.180717610 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:18:49 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cfe70fc-1bee-4bef-bb9d-910ccc1c896e/.meta.tmp'
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cfe70fc-1bee-4bef-bb9d-910ccc1c896e/.meta.tmp' to config b'/volumes/_nogroup/9cfe70fc-1bee-4bef-bb9d-910ccc1c896e/.meta'
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, vol_name:cephfs) < ""
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "format": "json"}]: dispatch
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, vol_name:cephfs) < ""
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, vol_name:cephfs) < ""
Dec 06 10:18:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:49 np0005548790.localdomain podman[314191]: 2025-12-06 10:18:49.065138816 +0000 UTC m=+0.225891991 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:18:49 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:18:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:49.111 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:49 np0005548790.localdomain podman[314184]: 2025-12-06 10:18:49.12407575 +0000 UTC m=+0.296373137 container remove f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-66856c74-d8dc-4fd2-9226-e45b15148141, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 10:18:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:49.137 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:49 np0005548790.localdomain kernel: device tapbfd7408a-e4 left promiscuous mode
Dec 06 10:18:49 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:49Z|00134|binding|INFO|Releasing lport bfd7408a-e48a-4b6a-a11f-c1722b38f74d from this chassis (sb_readonly=0)
Dec 06 10:18:49 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:18:49Z|00135|binding|INFO|Setting lport bfd7408a-e48a-4b6a-a11f-c1722b38f74d down in Southbound
Dec 06 10:18:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 86 op/s
Dec 06 10:18:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:49.150 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-66856c74-d8dc-4fd2-9226-e45b15148141', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66856c74-d8dc-4fd2-9226-e45b15148141', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bfe15ac-c004-483f-814d-df89bb910212, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=bfd7408a-e48a-4b6a-a11f-c1722b38f74d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:18:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:49.153 159200 INFO neutron.agent.ovn.metadata.agent [-] Port bfd7408a-e48a-4b6a-a11f-c1722b38f74d in datapath 66856c74-d8dc-4fd2-9226-e45b15148141 unbound from our chassis
Dec 06 10:18:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:49.155 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66856c74-d8dc-4fd2-9226-e45b15148141, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:18:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:18:49.156 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b46f8878-9b5f-4815-9af2-cee20910d054]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:18:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:49.159 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:18:49 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:49.217 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:49 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:18:49.598 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:18:49 np0005548790.localdomain systemd[1]: tmp-crun.De5rlU.mount: Deactivated successfully.
Dec 06 10:18:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-9944b1b3e83921e99121923e7a37a7abb6a9fbf297e8d92a8cdeebe476a3bee4-merged.mount: Deactivated successfully.
Dec 06 10:18:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1fd39902a4c6c973837a9dfabb3eb1fbca1dc354d55b6195b206398eeaa3311-userdata-shm.mount: Deactivated successfully.
Dec 06 10:18:49 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d66856c74\x2dd8dc\x2d4fd2\x2d9226\x2de45b15148141.mount: Deactivated successfully.
Dec 06 10:18:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:18:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "format": "json"}]: dispatch
Dec 06 10:18:50 np0005548790.localdomain ceph-mon[301742]: pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 4.2 KiB/s wr, 86 op/s
Dec 06 10:18:50 np0005548790.localdomain ceph-mon[301742]: mgrmap e47: np0005548790.kvkfyr(active, since 7m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:18:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/352148416' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:18:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/352148416' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:18:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:50.309 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:50 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:50.838 2 INFO neutron.agent.securitygroups_rpc [None req-a73968c5-9b01-4d0f-920e-d4625a021612 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 3.6 KiB/s wr, 73 op/s
Dec 06 10:18:51 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:51.393 2 INFO neutron.agent.securitygroups_rpc [None req-3e522469-6174-4bd8-8fa4-46c3281a8670 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:51 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:51.423 2 INFO neutron.agent.securitygroups_rpc [None req-c981ff7c-a5c4-4968-95e1-73b35c2abc32 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:51 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:51.785 2 INFO neutron.agent.securitygroups_rpc [None req-57ba5b5d-0488-433d-83ba-25f85616d546 9b1422e7ba894ba7b8e14df8e50e50d0 2b1d664fab0f4b7f87439c153244cdc1 - - default default] Security group member updated ['ec3ea7b0-8bb2-49ca-ac1e-7fc51bb6b4e0']
Dec 06 10:18:52 np0005548790.localdomain ceph-mon[301742]: pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 3.6 KiB/s wr, 73 op/s
Dec 06 10:18:52 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:52.375 2 INFO neutron.agent.securitygroups_rpc [None req-db878ba9-86fb-4d9b-8254-a77b4f9b264f a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:52 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:52.444 2 INFO neutron.agent.securitygroups_rpc [None req-d89000d1-9304-4659-90ea-3fec18561423 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:52.514 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:52.566 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 4.1 KiB/s wr, 23 op/s
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:18:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:18:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:18:53 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:53.634 2 INFO neutron.agent.securitygroups_rpc [None req-7967e84a-5cad-44b5-8ea9-0783854ccdc0 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:53.909 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:54 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:54.135 2 INFO neutron.agent.securitygroups_rpc [None req-47656a87-41ce-4f30-afbc-1720c247f9e1 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:54 np0005548790.localdomain ceph-mon[301742]: pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 4.1 KiB/s wr, 23 op/s
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "format": "json"}]: dispatch
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cfe70fc-1bee-4bef-bb9d-910ccc1c896e' of type subvolume
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.515+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cfe70fc-1bee-4bef-bb9d-910ccc1c896e' of type subvolume
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "force": true, "format": "json"}]: dispatch
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, vol_name:cephfs) < ""
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9cfe70fc-1bee-4bef-bb9d-910ccc1c896e'' moved to trashcan
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cfe70fc-1bee-4bef-bb9d-910ccc1c896e, vol_name:cephfs) < ""
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.535+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.535+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.535+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.535+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.535+0000 7f06375f2640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.576+0000 7f06365f0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.576+0000 7f06365f0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.576+0000 7f06365f0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.576+0000 7f06365f0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:18:54.576+0000 7f06365f0640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:18:54 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:54.903 2 INFO neutron.agent.securitygroups_rpc [None req-fc107b59-6198-4194-ae82-133aadd3bf55 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 4.1 KiB/s wr, 23 op/s
Dec 06 10:18:55 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "format": "json"}]: dispatch
Dec 06 10:18:55 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cfe70fc-1bee-4bef-bb9d-910ccc1c896e", "force": true, "format": "json"}]: dispatch
Dec 06 10:18:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:55.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:18:55 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:55.353 2 INFO neutron.agent.securitygroups_rpc [None req-6d09d5a9-00b8-48ae-971d-442c4fe5ddd4 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:55.669 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:56 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:56.152 2 INFO neutron.agent.securitygroups_rpc [None req-49caf4e8-1cf3-4ac4-8ed0-7c14787e9a49 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:18:56 np0005548790.localdomain ceph-mon[301742]: pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 4.1 KiB/s wr, 23 op/s
Dec 06 10:18:56 np0005548790.localdomain ceph-mon[301742]: mgrmap e48: np0005548790.kvkfyr(active, since 7m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:18:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:56.554 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 4.1 KiB/s wr, 23 op/s
Dec 06 10:18:57 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:57.376 2 INFO neutron.agent.securitygroups_rpc [None req-371fcf7f-6858-4f55-8627-7d44578a7f6c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:57.518 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:18:57.567 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:18:57 np0005548790.localdomain sudo[314296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:18:57 np0005548790.localdomain sudo[314296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:18:57 np0005548790.localdomain sudo[314296]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:57 np0005548790.localdomain sudo[314314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:18:57 np0005548790.localdomain sudo[314314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:18:58 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:58.224 2 INFO neutron.agent.securitygroups_rpc [None req-dcd65c97-9e9b-434b-8250-c459c3b8a42b a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:18:58 np0005548790.localdomain sudo[314314]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 4.1 KiB/s wr, 23 op/s
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:18:58 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev f630ee0f-2e3a-4002-a9b2-f6bb6be22c79 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:18:58 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev f630ee0f-2e3a-4002-a9b2-f6bb6be22c79 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:18:58 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event f630ee0f-2e3a-4002-a9b2-f6bb6be22c79 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:18:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:18:58 np0005548790.localdomain sudo[314365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:18:58 np0005548790.localdomain sudo[314365]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:18:58 np0005548790.localdomain sudo[314365]: pam_unix(sudo:session): session closed for user root
Dec 06 10:18:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:18:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 6.7 KiB/s wr, 35 op/s
Dec 06 10:18:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:18:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:18:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:18:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:18:59 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:18:59.741 2 INFO neutron.agent.securitygroups_rpc [None req-39208e2c-f7c0-489d-b264-ceaa95c43793 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']
Dec 06 10:19:00 np0005548790.localdomain ceph-mon[301742]: pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 6.7 KiB/s wr, 35 op/s
Dec 06 10:19:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 6.1 KiB/s wr, 29 op/s
Dec 06 10:19:01 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:01.286 2 INFO neutron.agent.securitygroups_rpc [None req-7d4e4684-fcf3-4893-8a23-2e4dea6b64ed 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']
Dec 06 10:19:02 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:19:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:19:02 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:02.316 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:3e:86 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd8b2850-e3e7-477f-8017-199231500400, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e418b23-64fb-4cc3-b4f5-351454b6f675) old=Port_Binding(mac=['fa:16:3e:fe:3e:86 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9beccfed-6ce7-4343-a09a-a10df412729f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:02 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:02.318 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e418b23-64fb-4cc3-b4f5-351454b6f675 in datapath 9beccfed-6ce7-4343-a09a-a10df412729f updated
Dec 06 10:19:02 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:02.320 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9beccfed-6ce7-4343-a09a-a10df412729f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:02 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:02.321 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7673a6-0410-4abb-ab43-6f1ffc1a5a31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:02.521 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:02.569 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:02 np0005548790.localdomain ceph-mon[301742]: pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 6.1 KiB/s wr, 29 op/s
Dec 06 10:19:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:19:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:02.689 2 INFO neutron.agent.securitygroups_rpc [None req-84b84473-0a2d-4cb4-b946-8bdffecc1ba7 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']
Dec 06 10:19:02 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:02.871 2 INFO neutron.agent.securitygroups_rpc [None req-9f09c577-620d-43b3-bb16-a6c9b188fc98 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 7.3 KiB/s wr, 39 op/s
Dec 06 10:19:03 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:03.572 2 INFO neutron.agent.securitygroups_rpc [None req-dcc65382-3d9e-4656-a85e-ef8650caf3cc b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:03 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 06 10:19:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:04 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:04.434 2 INFO neutron.agent.securitygroups_rpc [None req-debf1305-ba6e-49c2-9083-8908dd68e972 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:19:04 np0005548790.localdomain ceph-mon[301742]: pgmap v286: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 7.3 KiB/s wr, 39 op/s
Dec 06 10:19:04 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:04.985 262327 INFO neutron.agent.linux.ip_lib [None req-cffcce63-8a52-4daf-b467-a0c8b5d03ede - - - - - -] Device tap6272a5d5-64 cannot be used as it has no MAC address
Dec 06 10:19:05 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:05.017 2 INFO neutron.agent.securitygroups_rpc [None req-91092cc5-6d84-4cc3-b0ed-55c483b81857 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:05.037 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:05 np0005548790.localdomain kernel: device tap6272a5d5-64 entered promiscuous mode
Dec 06 10:19:05 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016345.0464] manager: (tap6272a5d5-64): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Dec 06 10:19:05 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:05Z|00136|binding|INFO|Claiming lport 6272a5d5-64d8-48a5-ba3a-ea77217dee6b for this chassis.
Dec 06 10:19:05 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:05Z|00137|binding|INFO|6272a5d5-64d8-48a5-ba3a-ea77217dee6b: Claiming unknown
Dec 06 10:19:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:05.047 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:05 np0005548790.localdomain systemd-udevd[314393]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:19:05 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:05.060 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-51560d53-5f9a-46c5-823c-16f2f5b717fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51560d53-5f9a-46c5-823c-16f2f5b717fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12673f85bb004c3c946338dc70e565e7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7a32195-cece-48a1-a6e6-b30849278cec, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=6272a5d5-64d8-48a5-ba3a-ea77217dee6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:05 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:05.061 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 6272a5d5-64d8-48a5-ba3a-ea77217dee6b in datapath 51560d53-5f9a-46c5-823c-16f2f5b717fc bound to our chassis
Dec 06 10:19:05 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:05.064 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port e34b3a60-7ea6-43d1-a1ee-379c344575f6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:19:05 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:05.064 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51560d53-5f9a-46c5-823c-16f2f5b717fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:05 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:05.065 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[25b154ad-0d58-404d-8820-9d2b87d95815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:05Z|00138|binding|INFO|Setting lport 6272a5d5-64d8-48a5-ba3a-ea77217dee6b ovn-installed in OVS
Dec 06 10:19:05 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:05Z|00139|binding|INFO|Setting lport 6272a5d5-64d8-48a5-ba3a-ea77217dee6b up in Southbound
Dec 06 10:19:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:05.086 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:05.110 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:05 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap6272a5d5-64: No such device
Dec 06 10:19:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:05.138 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 4.5 KiB/s wr, 25 op/s
Dec 06 10:19:05 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:05.581 2 INFO neutron.agent.securitygroups_rpc [None req-03766a42-54e7-4e6a-a01a-d12c463a6613 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:19:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e158 e158: 6 total, 6 up, 6 in
Dec 06 10:19:05 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:05.723 2 INFO neutron.agent.securitygroups_rpc [None req-9d59c4b8-d3a8-40d9-8d73-2b90f45c1e12 1333c58cfc75447fad1b488a958549ce a269d8afc49848fbb8ce5cdb49ef37dc - - default default] Security group member updated ['a71b7ce5-d152-4b06-83b8-76d380ec29b6']
Dec 06 10:19:05 np0005548790.localdomain podman[314464]: 
Dec 06 10:19:06 np0005548790.localdomain podman[314464]: 2025-12-06 10:19:06.009898612 +0000 UTC m=+0.084534277 container create 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:06 np0005548790.localdomain systemd[1]: Started libpod-conmon-08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a.scope.
Dec 06 10:19:06 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:19:06 np0005548790.localdomain podman[314464]: 2025-12-06 10:19:05.972651825 +0000 UTC m=+0.047287490 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:19:06 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15efbe0d3a54f85c97fab3e5a2a692d30d464471e5cee7798576d06646981534/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:06 np0005548790.localdomain podman[314464]: 2025-12-06 10:19:06.083339729 +0000 UTC m=+0.157975374 container init 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:19:06 np0005548790.localdomain podman[314464]: 2025-12-06 10:19:06.092441865 +0000 UTC m=+0.167077510 container start 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:19:06 np0005548790.localdomain dnsmasq[314482]: started, version 2.85 cachesize 150
Dec 06 10:19:06 np0005548790.localdomain dnsmasq[314482]: DNS service limited to local subnets
Dec 06 10:19:06 np0005548790.localdomain dnsmasq[314482]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:19:06 np0005548790.localdomain dnsmasq[314482]: warning: no upstream servers configured
Dec 06 10:19:06 np0005548790.localdomain dnsmasq-dhcp[314482]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:19:06 np0005548790.localdomain dnsmasq[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/addn_hosts - 0 addresses
Dec 06 10:19:06 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/host
Dec 06 10:19:06 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/opts
Dec 06 10:19:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:06.246 262327 INFO neutron.agent.dhcp.agent [None req-6ade0625-a6cd-4d06-bf75-fb9b1f3f4c7f - - - - - -] DHCP configuration for ports {'17cdee24-9478-4164-9ff8-687b5931afd0'} is completed
Dec 06 10:19:06 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:06.285 2 INFO neutron.agent.securitygroups_rpc [None req-13d35407-bef4-4c5e-baaa-9390a0fcd613 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:19:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:06.507 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:05Z, description=, device_id=bb479751-502c-46ed-b675-e6226f6b62b5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8576c040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8581fdc0>], id=bd780071-96c3-435a-8369-53ad56ea2f75, ip_allocation=immediate, mac_address=fa:16:3e:ef:76:d6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:19:00Z, description=, dns_domain=, id=51560d53-5f9a-46c5-823c-16f2f5b717fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1295535333, port_security_enabled=True, project_id=12673f85bb004c3c946338dc70e565e7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59183, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2011, status=ACTIVE, subnets=['53fb0fcf-3cdc-4774-b981-e592ce0722c9'], tags=[], tenant_id=12673f85bb004c3c946338dc70e565e7, updated_at=2025-12-06T10:19:02Z, vlan_transparent=None, network_id=51560d53-5f9a-46c5-823c-16f2f5b717fc, port_security_enabled=False, project_id=12673f85bb004c3c946338dc70e565e7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2030, status=DOWN, tags=[], tenant_id=12673f85bb004c3c946338dc70e565e7, updated_at=2025-12-06T10:19:06Z on network 51560d53-5f9a-46c5-823c-16f2f5b717fc
Dec 06 10:19:06 np0005548790.localdomain podman[314483]: 2025-12-06 10:19:06.572648253 +0000 UTC m=+0.088628808 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:06 np0005548790.localdomain podman[314483]: 2025-12-06 10:19:06.577575656 +0000 UTC m=+0.093556231 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:19:06 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:19:06 np0005548790.localdomain ceph-mon[301742]: pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 4.5 KiB/s wr, 25 op/s
Dec 06 10:19:06 np0005548790.localdomain ceph-mon[301742]: osdmap e158: 6 total, 6 up, 6 in
Dec 06 10:19:06 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3936622661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:06 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3936622661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:06 np0005548790.localdomain dnsmasq[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/addn_hosts - 1 addresses
Dec 06 10:19:06 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/host
Dec 06 10:19:06 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/opts
Dec 06 10:19:06 np0005548790.localdomain podman[314519]: 2025-12-06 10:19:06.713698228 +0000 UTC m=+0.058716599 container kill 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:19:07 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:07.028 262327 INFO neutron.agent.dhcp.agent [None req-7b98bd23-62c9-4842-b70b-e7a999c52bc5 - - - - - -] DHCP configuration for ports {'bd780071-96c3-435a-8369-53ad56ea2f75'} is completed
Dec 06 10:19:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 4.1 MiB/s rd, 5.4 KiB/s wr, 30 op/s
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:19:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:19:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:07.524 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:07.570 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:08 np0005548790.localdomain ceph-mon[301742]: pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 841 MiB used, 41 GiB / 42 GiB avail; 4.1 MiB/s rd, 5.4 KiB/s wr, 30 op/s
Dec 06 10:19:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Dec 06 10:19:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e159 e159: 6 total, 6 up, 6 in
Dec 06 10:19:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:19:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/961294606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:19:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/961294606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:10 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:10.297 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:05Z, description=, device_id=bb479751-502c-46ed-b675-e6226f6b62b5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8584f310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8577cca0>], id=bd780071-96c3-435a-8369-53ad56ea2f75, ip_allocation=immediate, mac_address=fa:16:3e:ef:76:d6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:19:00Z, description=, dns_domain=, id=51560d53-5f9a-46c5-823c-16f2f5b717fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1295535333, port_security_enabled=True, project_id=12673f85bb004c3c946338dc70e565e7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59183, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2011, status=ACTIVE, subnets=['53fb0fcf-3cdc-4774-b981-e592ce0722c9'], tags=[], tenant_id=12673f85bb004c3c946338dc70e565e7, updated_at=2025-12-06T10:19:02Z, vlan_transparent=None, network_id=51560d53-5f9a-46c5-823c-16f2f5b717fc, port_security_enabled=False, project_id=12673f85bb004c3c946338dc70e565e7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2030, status=DOWN, tags=[], tenant_id=12673f85bb004c3c946338dc70e565e7, updated_at=2025-12-06T10:19:06Z on network 51560d53-5f9a-46c5-823c-16f2f5b717fc
Dec 06 10:19:10 np0005548790.localdomain sshd[314559]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: tmp-crun.qetoKa.mount: Deactivated successfully.
Dec 06 10:19:10 np0005548790.localdomain dnsmasq[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/addn_hosts - 1 addresses
Dec 06 10:19:10 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/host
Dec 06 10:19:10 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/opts
Dec 06 10:19:10 np0005548790.localdomain podman[314558]: 2025-12-06 10:19:10.539716348 +0000 UTC m=+0.084390093 container kill 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:19:10 np0005548790.localdomain podman[314569]: 2025-12-06 10:19:10.596431993 +0000 UTC m=+0.104207920 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:19:10 np0005548790.localdomain podman[314569]: 2025-12-06 10:19:10.608356175 +0000 UTC m=+0.116132122 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:19:10 np0005548790.localdomain podman[314571]: 2025-12-06 10:19:10.654128783 +0000 UTC m=+0.151242872 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 06 10:19:10 np0005548790.localdomain podman[314571]: 2025-12-06 10:19:10.669286843 +0000 UTC m=+0.166400932 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:19:10 np0005548790.localdomain ceph-mon[301742]: pgmap v290: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Dec 06 10:19:10 np0005548790.localdomain ceph-mon[301742]: osdmap e159: 6 total, 6 up, 6 in
Dec 06 10:19:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/961294606' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/961294606' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:10 np0005548790.localdomain podman[314570]: 2025-12-06 10:19:10.76385403 +0000 UTC m=+0.267775693 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:19:10 np0005548790.localdomain podman[314570]: 2025-12-06 10:19:10.778027024 +0000 UTC m=+0.281948637 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:19:10 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:19:10 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:10.797 262327 INFO neutron.agent.dhcp.agent [None req-dfd41f7c-cd98-49e0-95a6-518e8ef0883e - - - - - -] DHCP configuration for ports {'bd780071-96c3-435a-8369-53ad56ea2f75'} is completed
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 5.3 MiB/s wr, 147 op/s
Dec 06 10:19:11 np0005548790.localdomain systemd[1]: tmp-crun.75OUaK.mount: Deactivated successfully.
Dec 06 10:19:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:11.586 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8:0:1:f816:3eff:fe3f:fa29'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82d32fcc-fa15-458b-9d3c-0c87258ef71d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=687d7abb-e6aa-4047-aa26-552c962fcc91) old=Port_Binding(mac=['fa:16:3e:3f:fa:29 2001:db8::f816:3eff:fe3f:fa29'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:fa29/64', 'neutron:device_id': 'ovnmeta-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43883dce-1590-48c4-987c-a21b63b82a1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '34a17eee71de4bac8b71972a4b7b506c', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:11.588 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 687d7abb-e6aa-4047-aa26-552c962fcc91 in datapath 43883dce-1590-48c4-987c-a21b63b82a1c updated
Dec 06 10:19:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:11.590 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43883dce-1590-48c4-987c-a21b63b82a1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:11.591 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[5df6f7e4-7ec5-4b4a-93fb-cb5729a768f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1079829945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1079829945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:11 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:11.986 2 INFO neutron.agent.securitygroups_rpc [None req-cdcfd119-194b-4de8-98ff-a5e7eedce5b7 7365839d5bca455283c571ca0abd33bb 12673f85bb004c3c946338dc70e565e7 - - default default] Security group member updated ['5a014cda-2333-483a-bcd0-2243e387c412']
Dec 06 10:19:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:12.064 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:11Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857ef8e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c857ef5e0>], id=f2f90e63-81f9-42f5-a9e3-974a5da29c73, ip_allocation=immediate, mac_address=fa:16:3e:e7:17:c5, name=tempest-FloatingIPNegativeTestJSON-995402438, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:19:00Z, description=, dns_domain=, id=51560d53-5f9a-46c5-823c-16f2f5b717fc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1295535333, port_security_enabled=True, project_id=12673f85bb004c3c946338dc70e565e7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=59183, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2011, status=ACTIVE, subnets=['53fb0fcf-3cdc-4774-b981-e592ce0722c9'], tags=[], tenant_id=12673f85bb004c3c946338dc70e565e7, updated_at=2025-12-06T10:19:02Z, vlan_transparent=None, network_id=51560d53-5f9a-46c5-823c-16f2f5b717fc, port_security_enabled=True, project_id=12673f85bb004c3c946338dc70e565e7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['5a014cda-2333-483a-bcd0-2243e387c412'], standard_attr_id=2043, status=DOWN, tags=[], tenant_id=12673f85bb004c3c946338dc70e565e7, updated_at=2025-12-06T10:19:11Z on network 51560d53-5f9a-46c5-823c-16f2f5b717fc
Dec 06 10:19:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:12.185 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:12.186 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:12.187 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:19:12 np0005548790.localdomain systemd[1]: tmp-crun.h4OyFg.mount: Deactivated successfully.
Dec 06 10:19:12 np0005548790.localdomain dnsmasq[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/addn_hosts - 2 addresses
Dec 06 10:19:12 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/host
Dec 06 10:19:12 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/opts
Dec 06 10:19:12 np0005548790.localdomain podman[314660]: 2025-12-06 10:19:12.281381554 +0000 UTC m=+0.068459162 container kill 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:19:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:12.527 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:12.546 262327 INFO neutron.agent.dhcp.agent [None req-30993708-b1c2-4791-a575-b32f7400f885 - - - - - -] DHCP configuration for ports {'f2f90e63-81f9-42f5-a9e3-974a5da29c73'} is completed
Dec 06 10:19:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:12.572 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:12 np0005548790.localdomain ceph-mon[301742]: pgmap v292: 177 pgs: 177 active+clean; 192 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 5.3 MiB/s wr, 147 op/s
Dec 06 10:19:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e160 e160: 6 total, 6 up, 6 in
Dec 06 10:19:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v294: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 157 KiB/s rd, 5.7 MiB/s wr, 230 op/s
Dec 06 10:19:13 np0005548790.localdomain ceph-mon[301742]: osdmap e160: 6 total, 6 up, 6 in
Dec 06 10:19:13 np0005548790.localdomain ceph-mon[301742]: pgmap v294: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 157 KiB/s rd, 5.7 MiB/s wr, 230 op/s
Dec 06 10:19:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e161 e161: 6 total, 6 up, 6 in
Dec 06 10:19:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:14 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:14.137 2 INFO neutron.agent.securitygroups_rpc [None req-a19ad948-85b8-4074-80f7-d1d223959ce7 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:19:14 np0005548790.localdomain ceph-mon[301742]: osdmap e161: 6 total, 6 up, 6 in
Dec 06 10:19:14 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/932893422' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:14 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/932893422' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:14 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:14.867 2 INFO neutron.agent.securitygroups_rpc [None req-83d5638e-2f4a-455b-b2d3-487dd6af4b6c a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:19:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v296: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 3.5 KiB/s wr, 90 op/s
Dec 06 10:19:15 np0005548790.localdomain ceph-mon[301742]: pgmap v296: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 3.5 KiB/s wr, 90 op/s
Dec 06 10:19:15 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:15.913 2 INFO neutron.agent.securitygroups_rpc [None req-ca0b48dc-f1ce-4207-b602-f1515b9dc7e0 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3547940131' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3547940131' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1061312478' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1061312478' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3547940131' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:16 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3547940131' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:17 np0005548790.localdomain sshd[314559]: Received disconnect from 101.47.160.186 port 35802:11: Bye Bye [preauth]
Dec 06 10:19:17 np0005548790.localdomain sshd[314559]: Disconnected from authenticating user root 101.47.160.186 port 35802 [preauth]
Dec 06 10:19:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:19:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.8 KiB/s wr, 72 op/s
Dec 06 10:19:17 np0005548790.localdomain podman[314681]: 2025-12-06 10:19:17.246372629 +0000 UTC m=+0.083407757 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:19:17 np0005548790.localdomain podman[314681]: 2025-12-06 10:19:17.288163089 +0000 UTC m=+0.125198177 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Dec 06 10:19:17 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:19:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:17.528 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:17.573 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:17.662 2 INFO neutron.agent.securitygroups_rpc [None req-09283ffa-3b28-4158-b02d-0d7572bb2b32 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:18 np0005548790.localdomain ceph-mon[301742]: pgmap v297: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 192 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.8 KiB/s wr, 72 op/s
Dec 06 10:19:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:18.248 2 INFO neutron.agent.securitygroups_rpc [None req-c5dcea7d-aa1b-4625-8fc4-dc86e9ad2a1a 7365839d5bca455283c571ca0abd33bb 12673f85bb004c3c946338dc70e565e7 - - default default] Security group member updated ['5a014cda-2333-483a-bcd0-2243e387c412']
Dec 06 10:19:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:19:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:19:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:19:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:19:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:19:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 06 10:19:18 np0005548790.localdomain dnsmasq[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/addn_hosts - 1 addresses
Dec 06 10:19:18 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/host
Dec 06 10:19:18 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/opts
Dec 06 10:19:18 np0005548790.localdomain systemd[1]: tmp-crun.YHbGoN.mount: Deactivated successfully.
Dec 06 10:19:18 np0005548790.localdomain podman[314717]: 2025-12-06 10:19:18.587412919 +0000 UTC m=+0.065345618 container kill 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:19:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 122 KiB/s rd, 5.1 KiB/s wr, 161 op/s
Dec 06 10:19:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:19.278 2 INFO neutron.agent.securitygroups_rpc [None req-7362e19c-e595-471d-b005-9585c5cb5a42 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:19:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:19:19 np0005548790.localdomain podman[314737]: 2025-12-06 10:19:19.573559071 +0000 UTC m=+0.086084650 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:19:19 np0005548790.localdomain podman[314737]: 2025-12-06 10:19:19.585234697 +0000 UTC m=+0.097760256 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:19:19 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:19:19 np0005548790.localdomain podman[314738]: 2025-12-06 10:19:19.669453584 +0000 UTC m=+0.179371452 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:19 np0005548790.localdomain podman[314738]: 2025-12-06 10:19:19.711241565 +0000 UTC m=+0.221159493 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:19:19 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:19:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:20.189 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:19:20 np0005548790.localdomain ceph-mon[301742]: pgmap v298: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 122 KiB/s rd, 5.1 KiB/s wr, 161 op/s
Dec 06 10:19:20 np0005548790.localdomain dnsmasq[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/addn_hosts - 0 addresses
Dec 06 10:19:20 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/host
Dec 06 10:19:20 np0005548790.localdomain dnsmasq-dhcp[314482]: read /var/lib/neutron/dhcp/51560d53-5f9a-46c5-823c-16f2f5b717fc/opts
Dec 06 10:19:20 np0005548790.localdomain podman[314802]: 2025-12-06 10:19:20.777885084 +0000 UTC m=+0.068691558 container kill 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:19:20 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:20.872 2 INFO neutron.agent.securitygroups_rpc [None req-8a7361c8-fe6c-42e8-b9eb-90548f1065a0 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:21.143 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:21 np0005548790.localdomain kernel: device tap6272a5d5-64 left promiscuous mode
Dec 06 10:19:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:21Z|00140|binding|INFO|Releasing lport 6272a5d5-64d8-48a5-ba3a-ea77217dee6b from this chassis (sb_readonly=0)
Dec 06 10:19:21 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:21Z|00141|binding|INFO|Setting lport 6272a5d5-64d8-48a5-ba3a-ea77217dee6b down in Southbound
Dec 06 10:19:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 4.9 KiB/s wr, 154 op/s
Dec 06 10:19:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:21.169 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:21.181 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-51560d53-5f9a-46c5-823c-16f2f5b717fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-51560d53-5f9a-46c5-823c-16f2f5b717fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '12673f85bb004c3c946338dc70e565e7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7a32195-cece-48a1-a6e6-b30849278cec, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=6272a5d5-64d8-48a5-ba3a-ea77217dee6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:21.182 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 6272a5d5-64d8-48a5-ba3a-ea77217dee6b in datapath 51560d53-5f9a-46c5-823c-16f2f5b717fc unbound from our chassis
Dec 06 10:19:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:21.185 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 51560d53-5f9a-46c5-823c-16f2f5b717fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:21.185 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[88735f2b-6683-4673-a98d-8a82bb831351]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1538185790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1538185790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e162 e162: 6 total, 6 up, 6 in
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: pgmap v299: 177 pgs: 177 active+clean; 192 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 4.9 KiB/s wr, 154 op/s
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: osdmap e162: 6 total, 6 up, 6 in
Dec 06 10:19:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:22.531 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:22.575 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3020127696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:19:22 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3020127696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 192 MiB data, 939 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 4.1 KiB/s wr, 129 op/s
Dec 06 10:19:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3020127696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3020127696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:19:23 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3025114459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:19:23 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3025114459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:19:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:19:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:23.771 2 INFO neutron.agent.securitygroups_rpc [None req-7c634670-f27b-4241-a6ed-35c65bde0f68 a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:19:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:24 np0005548790.localdomain ceph-mon[301742]: pgmap v301: 177 pgs: 177 active+clean; 192 MiB data, 939 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 4.1 KiB/s wr, 129 op/s
Dec 06 10:19:24 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3025114459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:24 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3025114459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:24 np0005548790.localdomain dnsmasq[314482]: exiting on receipt of SIGTERM
Dec 06 10:19:24 np0005548790.localdomain podman[314840]: 2025-12-06 10:19:24.482837307 +0000 UTC m=+0.056200482 container kill 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:19:24 np0005548790.localdomain systemd[1]: libpod-08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a.scope: Deactivated successfully.
Dec 06 10:19:24 np0005548790.localdomain podman[314852]: 2025-12-06 10:19:24.54834598 +0000 UTC m=+0.050607741 container died 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:19:24 np0005548790.localdomain podman[314852]: 2025-12-06 10:19:24.581688112 +0000 UTC m=+0.083949833 container cleanup 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:19:24 np0005548790.localdomain systemd[1]: libpod-conmon-08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a.scope: Deactivated successfully.
Dec 06 10:19:24 np0005548790.localdomain podman[314854]: 2025-12-06 10:19:24.625303922 +0000 UTC m=+0.122852025 container remove 08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-51560d53-5f9a-46c5-823c-16f2f5b717fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:19:25 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:25.043 262327 INFO neutron.agent.dhcp.agent [None req-4ff401e8-40fc-41c4-b349-6355af0c9ae8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 192 MiB data, 939 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 3.8 KiB/s wr, 121 op/s
Dec 06 10:19:25 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:25.185 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:25.244 2 INFO neutron.agent.securitygroups_rpc [None req-e1143dbb-8340-4dac-af2c-b301e23bde0e a6a9256bca1441629c18003b71ba1c6f 34a17eee71de4bac8b71972a4b7b506c - - default default] Security group member updated ['d618a097-5989-47aa-9263-1c8a114ad269']
Dec 06 10:19:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 e163: 6 total, 6 up, 6 in
Dec 06 10:19:25 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:25.455 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-15efbe0d3a54f85c97fab3e5a2a692d30d464471e5cee7798576d06646981534-merged.mount: Deactivated successfully.
Dec 06 10:19:25 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08e12c7b51cb9d1e2ff66cffe85aeb20473fb7b993654762da7c7848fe32cd0a-userdata-shm.mount: Deactivated successfully.
Dec 06 10:19:25 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d51560d53\x2d5f9a\x2d46c5\x2d823c\x2d16f2f5b717fc.mount: Deactivated successfully.
Dec 06 10:19:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:25.517 2 INFO neutron.agent.securitygroups_rpc [None req-16b19944-c36d-4221-9d13-f63b2c9f61ac b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']
Dec 06 10:19:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:25.678 2 INFO neutron.agent.securitygroups_rpc [None req-16b19944-c36d-4221-9d13-f63b2c9f61ac b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']
Dec 06 10:19:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:25.777 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:26 np0005548790.localdomain ceph-mon[301742]: pgmap v302: 177 pgs: 177 active+clean; 192 MiB data, 939 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 3.8 KiB/s wr, 121 op/s
Dec 06 10:19:26 np0005548790.localdomain ceph-mon[301742]: osdmap e163: 6 total, 6 up, 6 in
Dec 06 10:19:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:26.798 2 INFO neutron.agent.securitygroups_rpc [None req-6fcbbc2a-54c0-4eb0-a7e2-cb02681a4453 b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']
Dec 06 10:19:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 192 MiB data, 939 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 57 op/s
Dec 06 10:19:27 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:27.178 2 INFO neutron.agent.securitygroups_rpc [None req-a0866618-9e73-4e70-a70b-4bf19bcc43ec b30ee2b2ade74f9e80de3f1afc291bda 29c573bcf157448abe548893ad01e3d2 - - default default] Security group member updated ['96d14d8a-5f78-4831-ba37-3f88bccdbe58']
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.208641) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367208717, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2235, "num_deletes": 268, "total_data_size": 4244735, "memory_usage": 4323160, "flush_reason": "Manual Compaction"}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 06 10:19:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:27.209 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367226242, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 2755035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20815, "largest_seqno": 23045, "table_properties": {"data_size": 2746461, "index_size": 5207, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19127, "raw_average_key_size": 20, "raw_value_size": 2728650, "raw_average_value_size": 2991, "num_data_blocks": 225, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016242, "oldest_key_time": 1765016242, "file_creation_time": 1765016367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17649 microseconds, and 6152 cpu microseconds.
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.226296) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 2755035 bytes OK
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.226319) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.227983) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.228003) EVENT_LOG_v1 {"time_micros": 1765016367227997, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.228024) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 4234450, prev total WAL file size 4234450, number of live WAL files 2.
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.229144) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303137' seq:72057594037927935, type:22 .. '6C6F676D0034323731' seq:0, type:0; will stop at (end)
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(2690KB)], [30(16MB)]
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367229211, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 20324397, "oldest_snapshot_seqno": -1}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12945 keys, 19836192 bytes, temperature: kUnknown
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367334620, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 19836192, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19760327, "index_size": 42420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 345833, "raw_average_key_size": 26, "raw_value_size": 19538031, "raw_average_value_size": 1509, "num_data_blocks": 1616, "num_entries": 12945, "num_filter_entries": 12945, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016367, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.334965) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 19836192 bytes
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.336696) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.6 rd, 188.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 16.8 +0.0 blob) out(18.9 +0.0 blob), read-write-amplify(14.6) write-amplify(7.2) OK, records in: 13496, records dropped: 551 output_compression: NoCompression
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.336724) EVENT_LOG_v1 {"time_micros": 1765016367336711, "job": 16, "event": "compaction_finished", "compaction_time_micros": 105530, "compaction_time_cpu_micros": 49263, "output_level": 6, "num_output_files": 1, "total_output_size": 19836192, "num_input_records": 13496, "num_output_records": 12945, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367337203, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016367339570, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.229019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.339681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.339688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.339691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.339694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:27 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:27.339697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:27.556 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:27.577 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:28 np0005548790.localdomain ceph-mon[301742]: pgmap v304: 177 pgs: 177 active+clean; 192 MiB data, 939 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.2 KiB/s wr, 57 op/s
Dec 06 10:19:28 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3594078665' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:28 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3594078665' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:19:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Cumulative writes: 2279 writes, 23K keys, 2279 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.07 MB/s
                                                           Cumulative WAL: 2279 writes, 2279 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2279 writes, 23K keys, 2279 commit groups, 1.0 writes per commit group, ingest: 41.84 MB, 0.07 MB/s
                                                           Interval WAL: 2279 writes, 2279 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    161.4      0.19              0.07         8    0.024       0      0       0.0       0.0
                                                             L6      1/0   18.92 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    171.5    156.9      0.83              0.35         7    0.119     89K   3460       0.0       0.0
                                                            Sum      1/0   18.92 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.2    139.3    157.8      1.03              0.42        15    0.068     89K   3460       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.2    139.8    158.3      1.02              0.42        14    0.073     89K   3460       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    171.5    156.9      0.83              0.35         7    0.119     89K   3460       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    164.4      0.19              0.07         7    0.027       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.030, interval 0.030
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.16 GB write, 0.27 MB/s write, 0.14 GB read, 0.24 MB/s read, 1.0 seconds
                                                           Interval compaction: 0.16 GB write, 0.27 MB/s write, 0.14 GB read, 0.24 MB/s read, 1.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x55bcb02831f0#2 capacity: 308.00 MB usage: 13.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000107 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(566,12.93 MB,4.1977%) FilterBlock(15,268.42 KB,0.0851074%) IndexBlock(15,362.95 KB,0.11508%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 06 10:19:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v305: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 5.2 KiB/s wr, 116 op/s
Dec 06 10:19:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3743484833' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3743484833' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3837e3e6-e965-4096-a547-bca7edbdb76b, vol_name:cephfs) < ""
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3837e3e6-e965-4096-a547-bca7edbdb76b/.meta.tmp'
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3837e3e6-e965-4096-a547-bca7edbdb76b/.meta.tmp' to config b'/volumes/_nogroup/3837e3e6-e965-4096-a547-bca7edbdb76b/.meta'
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3837e3e6-e965-4096-a547-bca7edbdb76b, vol_name:cephfs) < ""
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "format": "json"}]: dispatch
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3837e3e6-e965-4096-a547-bca7edbdb76b, vol_name:cephfs) < ""
Dec 06 10:19:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3837e3e6-e965-4096-a547-bca7edbdb76b, vol_name:cephfs) < ""
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4147539828' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4147539828' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: pgmap v305: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 5.2 KiB/s wr, 116 op/s
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4147539828' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:30 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4147539828' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:30 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:30.942 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 4.7 KiB/s wr, 103 op/s
Dec 06 10:19:31 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:31.272 2 INFO neutron.agent.securitygroups_rpc [None req-33076df9-23c5-4745-bba5-728ca02b1a7f b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "format": "json"}]: dispatch
Dec 06 10:19:32 np0005548790.localdomain ceph-mon[301742]: pgmap v306: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 4.7 KiB/s wr, 103 op/s
Dec 06 10:19:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:32.578 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:19:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:32.580 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:19:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:32.581 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:19:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:32.581 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:19:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:32.594 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:32.594 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 5.6 KiB/s wr, 84 op/s
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "format": "json"}]: dispatch
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3837e3e6-e965-4096-a547-bca7edbdb76b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3837e3e6-e965-4096-a547-bca7edbdb76b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:19:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:19:33.862+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3837e3e6-e965-4096-a547-bca7edbdb76b' of type subvolume
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3837e3e6-e965-4096-a547-bca7edbdb76b' of type subvolume
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "force": true, "format": "json"}]: dispatch
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3837e3e6-e965-4096-a547-bca7edbdb76b, vol_name:cephfs) < ""
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3837e3e6-e965-4096-a547-bca7edbdb76b'' moved to trashcan
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:19:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3837e3e6-e965-4096-a547-bca7edbdb76b, vol_name:cephfs) < ""
Dec 06 10:19:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:34 np0005548790.localdomain ceph-mon[301742]: pgmap v307: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 5.6 KiB/s wr, 84 op/s
Dec 06 10:19:34 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1956276203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:35.113 2 INFO neutron.agent.securitygroups_rpc [None req-dad18757-c8ae-4573-92a7-49e2b9f564ab a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']
Dec 06 10:19:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v308: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 5.6 KiB/s wr, 84 op/s
Dec 06 10:19:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:35.419 2 INFO neutron.agent.securitygroups_rpc [None req-dad18757-c8ae-4573-92a7-49e2b9f564ab a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']
Dec 06 10:19:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "format": "json"}]: dispatch
Dec 06 10:19:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3837e3e6-e965-4096-a547-bca7edbdb76b", "force": true, "format": "json"}]: dispatch
Dec 06 10:19:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2644755643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:36 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:36.141 2 INFO neutron.agent.securitygroups_rpc [None req-792339ab-c7cd-409a-a342-ae21c75c2ee5 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:19:36 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:36.471 2 INFO neutron.agent.securitygroups_rpc [None req-21429926-074a-46a0-a4f4-611f2e364131 a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']
Dec 06 10:19:36 np0005548790.localdomain ceph-mon[301742]: pgmap v308: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 5.6 KiB/s wr, 84 op/s
Dec 06 10:19:36 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:36.544 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/.meta.tmp'
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/.meta.tmp' to config b'/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/.meta'
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "format": "json"}]: dispatch
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:37 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:37.109 2 INFO neutron.agent.securitygroups_rpc [None req-bb88ef2d-64f1-4b09-a81f-2bd8c1d4b6c6 a31577503edf4745abb112adc3113276 90bd35d6ab7c40c58d9d1d61ff7a12d3 - - default default] Security group member updated ['1fabbc74-497e-44d5-8d22-97b341de2968']
Dec 06 10:19:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 4.7 KiB/s wr, 71 op/s
Dec 06 10:19:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:19:37 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:37.331 262327 INFO neutron.agent.linux.ip_lib [None req-dd933904-8e42-4806-a7b6-655752126ec7 - - - - - -] Device tap947432f0-9b cannot be used as it has no MAC address
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.344 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.344 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.345 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.348 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 np0005548790.localdomain kernel: device tap947432f0-9b entered promiscuous mode
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.360 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:37Z|00142|binding|INFO|Claiming lport 947432f0-9b0d-4744-bced-84ec7576b9c5 for this chassis.
Dec 06 10:19:37 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:37Z|00143|binding|INFO|947432f0-9b0d-4744-bced-84ec7576b9c5: Claiming unknown
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.363 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:19:37 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016377.3646] manager: (tap947432f0-9b): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Dec 06 10:19:37 np0005548790.localdomain systemd-udevd[314900]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:19:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:37.371 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf26c9e9-21b5-4ea6-9866-b034ff322cf2, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=947432f0-9b0d-4744-bced-84ec7576b9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:37 np0005548790.localdomain podman[314879]: 2025-12-06 10:19:37.372468483 +0000 UTC m=+0.094598551 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:19:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:37.376 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 947432f0-9b0d-4744-bced-84ec7576b9c5 in datapath c9d67231-ab4c-4f96-bca5-0b19bf32e0d0 bound to our chassis
Dec 06 10:19:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:37.379 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d67231-ab4c-4f96-bca5-0b19bf32e0d0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:19:37 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:37.380 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[4c69099e-0ac7-44c9-9d0e-7a20c6c659b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:37Z|00144|binding|INFO|Setting lport 947432f0-9b0d-4744-bced-84ec7576b9c5 ovn-installed in OVS
Dec 06 10:19:37 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:37Z|00145|binding|INFO|Setting lport 947432f0-9b0d-4744-bced-84ec7576b9c5 up in Southbound
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.401 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain podman[314879]: 2025-12-06 10:19:37.414038379 +0000 UTC m=+0.136168447 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:19:37 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap947432f0-9b: No such device
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.475 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "format": "json"}]: dispatch
Dec 06 10:19:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.594 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:37.596 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:37 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:37.822 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:38 np0005548790.localdomain podman[314976]: 
Dec 06 10:19:38 np0005548790.localdomain podman[314976]: 2025-12-06 10:19:38.231269967 +0000 UTC m=+0.073855349 container create b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:19:38 np0005548790.localdomain systemd[1]: Started libpod-conmon-b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b.scope.
Dec 06 10:19:38 np0005548790.localdomain podman[314976]: 2025-12-06 10:19:38.186904766 +0000 UTC m=+0.029490138 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:19:38 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:19:38 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f684f63584aa27adcf81303e96b1315140634d356c57299f57780f47e0ab69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:38 np0005548790.localdomain podman[314976]: 2025-12-06 10:19:38.301655672 +0000 UTC m=+0.144241004 container init b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:19:38 np0005548790.localdomain podman[314976]: 2025-12-06 10:19:38.307700497 +0000 UTC m=+0.150285839 container start b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:19:38 np0005548790.localdomain dnsmasq[314994]: started, version 2.85 cachesize 150
Dec 06 10:19:38 np0005548790.localdomain dnsmasq[314994]: DNS service limited to local subnets
Dec 06 10:19:38 np0005548790.localdomain dnsmasq[314994]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:19:38 np0005548790.localdomain dnsmasq[314994]: warning: no upstream servers configured
Dec 06 10:19:38 np0005548790.localdomain dnsmasq-dhcp[314994]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Dec 06 10:19:38 np0005548790.localdomain dnsmasq[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/addn_hosts - 0 addresses
Dec 06 10:19:38 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/host
Dec 06 10:19:38 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/opts
Dec 06 10:19:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:38.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:38 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:38.450 262327 INFO neutron.agent.dhcp.agent [None req-472016b6-016f-488f-9e64-66bf1d1f192d - - - - - -] DHCP configuration for ports {'5ec1809c-e091-45bb-9080-5aa3e7d79943'} is completed
Dec 06 10:19:38 np0005548790.localdomain ceph-mon[301742]: pgmap v309: 177 pgs: 177 active+clean; 192 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 4.7 KiB/s wr, 71 op/s
Dec 06 10:19:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 238 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 06 10:19:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3512849964' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3512849964' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "auth_id": "tempest-cephx-id-659509012", "tenant_id": "d694f30d513746329568207534277c9c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-659509012, format:json, prefix:fs subvolume authorize, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, tenant_id:d694f30d513746329568207534277c9c, vol_name:cephfs) < ""
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} v 0)
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-659509012 with tenant d694f30d513746329568207534277c9c
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-659509012, format:json, prefix:fs subvolume authorize, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, tenant_id:d694f30d513746329568207534277c9c, vol_name:cephfs) < ""
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: pgmap v310: 177 pgs: 177 active+clean; 238 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-659509012", "caps": ["mds", "allow rw path=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba", "osd", "allow rw pool=manila_data namespace=fsvolumens_7a05360b-59a7-495e-a884-ff87c0880377", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/467307692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:19:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/467307692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 238 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "auth_id": "tempest-cephx-id-659509012", "format": "json"}]: dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-659509012, format:json, prefix:fs subvolume deauthorize, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:41.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} v 0)
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"} v 0)
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-659509012, format:json, prefix:fs subvolume deauthorize, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "auth_id": "tempest-cephx-id-659509012", "format": "json"}]: dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-659509012, format:json, prefix:fs subvolume evict, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-659509012, client_metadata.root=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba
Dec 06 10:19:41 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-659509012,client_metadata.root=/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377/50397115-0c2d-4191-896c-db5fe71ed3ba],prefix=session evict} (starting...)
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-659509012, format:json, prefix:fs subvolume evict, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:41.455 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:40Z, description=, device_id=fd85d19e-cd56-4873-b452-bd25ff0b3893, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85842760>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c858420d0>], id=20fb4069-b2cd-43be-9bc0-8450a150c331, ip_allocation=immediate, mac_address=fa:16:3e:17:41:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:19:33Z, description=, dns_domain=, id=c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2019732009, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15187, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2163, status=ACTIVE, subnets=['b2556eec-504a-4d64-a769-be4bd0740580'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:19:36Z, vlan_transparent=None, network_id=c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2200, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:19:41Z on network c9d67231-ab4c-4f96-bca5-0b19bf32e0d0
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a05360b-59a7-495e-a884-ff87c0880377", "format": "json"}]: dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7a05360b-59a7-495e-a884-ff87c0880377, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7a05360b-59a7-495e-a884-ff87c0880377, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a05360b-59a7-495e-a884-ff87c0880377' of type subvolume
Dec 06 10:19:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:19:41.584+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7a05360b-59a7-495e-a884-ff87c0880377' of type subvolume
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: tmp-crun.i3RNS4.mount: Deactivated successfully.
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "force": true, "format": "json"}]: dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7a05360b-59a7-495e-a884-ff87c0880377'' moved to trashcan
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7a05360b-59a7-495e-a884-ff87c0880377, vol_name:cephfs) < ""
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "auth_id": "tempest-cephx-id-659509012", "tenant_id": "d694f30d513746329568207534277c9c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/467307692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/467307692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-659509012", "format": "json"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"} : dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-659509012"}]': finished
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: tmp-crun.kMvNvm.mount: Deactivated successfully.
Dec 06 10:19:41 np0005548790.localdomain podman[314998]: 2025-12-06 10:19:41.638626563 +0000 UTC m=+0.149106818 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:19:41 np0005548790.localdomain podman[314998]: 2025-12-06 10:19:41.649538157 +0000 UTC m=+0.160018392 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:19:41 np0005548790.localdomain podman[314999]: 2025-12-06 10:19:41.602910066 +0000 UTC m=+0.110261456 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:41 np0005548790.localdomain podman[314999]: 2025-12-06 10:19:41.740246673 +0000 UTC m=+0.247598093 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:19:41 np0005548790.localdomain dnsmasq[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/addn_hosts - 1 addresses
Dec 06 10:19:41 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/host
Dec 06 10:19:41 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/opts
Dec 06 10:19:41 np0005548790.localdomain podman[315060]: 2025-12-06 10:19:41.811349577 +0000 UTC m=+0.063710535 container kill b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:19:41
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'images', 'volumes', 'manila_data', 'vms', 'backups']
Dec 06 10:19:41 np0005548790.localdomain podman[315001]: 2025-12-06 10:19:41.790646677 +0000 UTC m=+0.293741722 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:19:41 np0005548790.localdomain podman[315001]: 2025-12-06 10:19:41.875477843 +0000 UTC m=+0.378572948 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:19:41 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d55ecf85-9ebf-4a13-b414-b26993895e14, vol_name:cephfs) < ""
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d55ecf85-9ebf-4a13-b414-b26993895e14/.meta.tmp'
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d55ecf85-9ebf-4a13-b414-b26993895e14/.meta.tmp' to config b'/volumes/_nogroup/d55ecf85-9ebf-4a13-b414-b26993895e14/.meta'
Dec 06 10:19:42 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:42.052 262327 INFO neutron.agent.dhcp.agent [None req-ff1ba803-4ad5-4392-af62-f13f26fc405f - - - - - -] DHCP configuration for ports {'20fb4069-b2cd-43be-9bc0-8450a150c331'} is completed
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d55ecf85-9ebf-4a13-b414-b26993895e14, vol_name:cephfs) < ""
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "format": "json"}]: dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d55ecf85-9ebf-4a13-b414-b26993895e14, vol_name:cephfs) < ""
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00296840103294249 of space, bias 1.0, pg target 0.5926907395775172 quantized to 32 (current 32)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.416259538432906e-05 quantized to 32 (current 32)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 1.1450481574539363e-05 of space, bias 4.0, pg target 0.00909931602456728 quantized to 16 (current 16)
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d55ecf85-9ebf-4a13-b414-b26993895e14, vol_name:cephfs) < ""
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:19:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:42 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:42.466 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:19:40Z, description=, device_id=fd85d19e-cd56-4873-b452-bd25ff0b3893, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8577c520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8577cd90>], id=20fb4069-b2cd-43be-9bc0-8450a150c331, ip_allocation=immediate, mac_address=fa:16:3e:17:41:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:19:33Z, description=, dns_domain=, id=c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2019732009, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=15187, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2163, status=ACTIVE, subnets=['b2556eec-504a-4d64-a769-be4bd0740580'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:19:36Z, vlan_transparent=None, network_id=c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2200, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:19:41Z on network c9d67231-ab4c-4f96-bca5-0b19bf32e0d0
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.597 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.599 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.599 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.600 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.633 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:42.634 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: pgmap v311: 177 pgs: 177 active+clean; 238 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "auth_id": "tempest-cephx-id-659509012", "format": "json"}]: dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "auth_id": "tempest-cephx-id-659509012", "format": "json"}]: dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7a05360b-59a7-495e-a884-ff87c0880377", "format": "json"}]: dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7a05360b-59a7-495e-a884-ff87c0880377", "force": true, "format": "json"}]: dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/139921279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:19:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/139921279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:19:42 np0005548790.localdomain podman[315110]: 2025-12-06 10:19:42.672450075 +0000 UTC m=+0.071209209 container kill b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:19:42 np0005548790.localdomain dnsmasq[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/addn_hosts - 1 addresses
Dec 06 10:19:42 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/host
Dec 06 10:19:42 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/opts
Dec 06 10:19:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 192 MiB data, 927 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.355 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.356 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:19:43 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:43.624 2 INFO neutron.agent.securitygroups_rpc [None req-f21d32c3-41e3-465d-a5ba-39b4b631a0c1 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['ae1eaa44-7360-485a-b85b-f1bfb95ce20b']
Dec 06 10:19:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "format": "json"}]: dispatch
Dec 06 10:19:43 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:43.699 262327 INFO neutron.agent.dhcp.agent [None req-0d15e6d8-0cfc-49d4-8962-ae7188744f93 - - - - - -] DHCP configuration for ports {'20fb4069-b2cd-43be-9bc0-8450a150c331'} is completed
Dec 06 10:19:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:19:43 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1258063083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:43.854 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.041 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.043 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11594MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.043 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.043 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:19:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.394 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.395 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:19:44 np0005548790.localdomain ceph-mon[301742]: pgmap v312: 177 pgs: 177 active+clean; 192 MiB data, 927 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Dec 06 10:19:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1258063083' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.702 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.727 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.727 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.745 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.776 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:19:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:44.800 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 192 MiB data, 927 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec 06 10:19:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:19:45 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2619362809' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:45.255 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:19:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:45.259 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:19:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:45.513 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:19:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:45.516 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:19:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:45.516 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:19:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2619362809' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "format": "json"}]: dispatch
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d55ecf85-9ebf-4a13-b414-b26993895e14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d55ecf85-9ebf-4a13-b414-b26993895e14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:19:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:19:45.693+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd55ecf85-9ebf-4a13-b414-b26993895e14' of type subvolume
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd55ecf85-9ebf-4a13-b414-b26993895e14' of type subvolume
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "force": true, "format": "json"}]: dispatch
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d55ecf85-9ebf-4a13-b414-b26993895e14, vol_name:cephfs) < ""
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d55ecf85-9ebf-4a13-b414-b26993895e14'' moved to trashcan
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:19:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d55ecf85-9ebf-4a13-b414-b26993895e14, vol_name:cephfs) < ""
Dec 06 10:19:46 np0005548790.localdomain ceph-mon[301742]: pgmap v313: 177 pgs: 177 active+clean; 192 MiB data, 927 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec 06 10:19:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "format": "json"}]: dispatch
Dec 06 10:19:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d55ecf85-9ebf-4a13-b414-b26993895e14", "force": true, "format": "json"}]: dispatch
Dec 06 10:19:46 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2321015285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:47 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:47.117 262327 INFO neutron.agent.linux.ip_lib [None req-3bcb5eb6-d1e1-4315-9b6f-3e4b977417b6 - - - - - -] Device tap8d2f31e2-69 cannot be used as it has no MAC address
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.143 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain kernel: device tap8d2f31e2-69 entered promiscuous mode
Dec 06 10:19:47 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016387.1509] manager: (tap8d2f31e2-69): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Dec 06 10:19:47 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:47Z|00146|binding|INFO|Claiming lport 8d2f31e2-69d8-4f51-96e6-9e70a13e384e for this chassis.
Dec 06 10:19:47 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:47Z|00147|binding|INFO|8d2f31e2-69d8-4f51-96e6-9e70a13e384e: Claiming unknown
Dec 06 10:19:47 np0005548790.localdomain systemd-udevd[315207]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.157 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 192 MiB data, 927 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.173 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a51fe0ea-a792-451c-8baf-2b5d7719d125', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a51fe0ea-a792-451c-8baf-2b5d7719d125', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90bd35d6ab7c40c58d9d1d61ff7a12d3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0453294d-76be-4825-8ca3-e3add9a9a816, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=8d2f31e2-69d8-4f51-96e6-9e70a13e384e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.175 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 8d2f31e2-69d8-4f51-96e6-9e70a13e384e in datapath a51fe0ea-a792-451c-8baf-2b5d7719d125 bound to our chassis
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.176 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port a541d63f-f9a5-49cc-aaf5-b3a30ac93b28 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.176 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a51fe0ea-a792-451c-8baf-2b5d7719d125, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.177 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[6de8404f-0602-4984-83f4-93755739733e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:47 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:47Z|00148|binding|INFO|Setting lport 8d2f31e2-69d8-4f51-96e6-9e70a13e384e ovn-installed in OVS
Dec 06 10:19:47 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:47Z|00149|binding|INFO|Setting lport 8d2f31e2-69d8-4f51-96e6-9e70a13e384e up in Southbound
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.192 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain podman[315199]: 2025-12-06 10:19:47.217342469 +0000 UTC m=+0.077218932 container kill b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 10:19:47 np0005548790.localdomain dnsmasq[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/addn_hosts - 0 addresses
Dec 06 10:19:47 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/host
Dec 06 10:19:47 np0005548790.localdomain dnsmasq-dhcp[314994]: read /var/lib/neutron/dhcp/c9d67231-ab4c-4f96-bca5-0b19bf32e0d0/opts
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.221 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.240 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:19:47 np0005548790.localdomain podman[315244]: 2025-12-06 10:19:47.575856883 +0000 UTC m=+0.090106630 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:19:47 np0005548790.localdomain podman[315244]: 2025-12-06 10:19:47.587379514 +0000 UTC m=+0.101629231 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 06 10:19:47 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.672 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain kernel: device tap947432f0-9b left promiscuous mode
Dec 06 10:19:47 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:47Z|00150|binding|INFO|Releasing lport 947432f0-9b0d-4744-bced-84ec7576b9c5 from this chassis (sb_readonly=0)
Dec 06 10:19:47 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:47Z|00151|binding|INFO|Setting lport 947432f0-9b0d-4744-bced-84ec7576b9c5 down in Southbound
Dec 06 10:19:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:47.700 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.781 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf26c9e9-21b5-4ea6-9866-b034ff322cf2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=947432f0-9b0d-4744-bced-84ec7576b9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.783 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 947432f0-9b0d-4744-bced-84ec7576b9c5 in datapath c9d67231-ab4c-4f96-bca5-0b19bf32e0d0 unbound from our chassis
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.784 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c9d67231-ab4c-4f96-bca5-0b19bf32e0d0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:19:47 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:47.785 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0818c4-e868-4a18-b418-f05c9732c8de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:48 np0005548790.localdomain podman[315297]: 2025-12-06 10:19:48.095603691 +0000 UTC m=+0.090635735 container create c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:19:48 np0005548790.localdomain systemd[1]: Started libpod-conmon-c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0.scope.
Dec 06 10:19:48 np0005548790.localdomain podman[315297]: 2025-12-06 10:19:48.053309415 +0000 UTC m=+0.048341489 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:19:48 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:19:48 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc43e3128927d1cbccce2238e37681993846ad1b8f061ee4c9b6700e6dd94cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:19:48 np0005548790.localdomain podman[315297]: 2025-12-06 10:19:48.170824546 +0000 UTC m=+0.165856590 container init c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:19:48 np0005548790.localdomain podman[315297]: 2025-12-06 10:19:48.179234773 +0000 UTC m=+0.174266807 container start c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:19:48 np0005548790.localdomain dnsmasq[315316]: started, version 2.85 cachesize 150
Dec 06 10:19:48 np0005548790.localdomain dnsmasq[315316]: DNS service limited to local subnets
Dec 06 10:19:48 np0005548790.localdomain dnsmasq[315316]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:19:48 np0005548790.localdomain dnsmasq[315316]: warning: no upstream servers configured
Dec 06 10:19:48 np0005548790.localdomain dnsmasq-dhcp[315316]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 06 10:19:48 np0005548790.localdomain dnsmasq[315316]: read /var/lib/neutron/dhcp/a51fe0ea-a792-451c-8baf-2b5d7719d125/addn_hosts - 0 addresses
Dec 06 10:19:48 np0005548790.localdomain dnsmasq-dhcp[315316]: read /var/lib/neutron/dhcp/a51fe0ea-a792-451c-8baf-2b5d7719d125/host
Dec 06 10:19:48 np0005548790.localdomain dnsmasq-dhcp[315316]: read /var/lib/neutron/dhcp/a51fe0ea-a792-451c-8baf-2b5d7719d125/opts
Dec 06 10:19:48 np0005548790.localdomain ceph-mon[301742]: pgmap v314: 177 pgs: 177 active+clean; 192 MiB data, 927 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec 06 10:19:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2332530373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:19:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:19:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.401 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.402 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.402 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:19:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:19:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158561 "" "Go-http-client/1.1"
Dec 06 10:19:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:19:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19686 "" "Go-http-client/1.1"
Dec 06 10:19:48 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:48.461 262327 INFO neutron.agent.dhcp.agent [None req-bae1b11e-6557-48d2-992c-e7fcde5a9406 - - - - - -] DHCP configuration for ports {'3813a795-db36-48a5-806e-4a87117e106f'} is completed
Dec 06 10:19:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:48.518 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:48.518 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:19:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:48.519 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.597 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=528a9f17-509a-4c49-a9ac-4a6363f2178f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=659e29bd-a84c-4733-b754-dbb7b70b98cc) old=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.599 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 659e29bd-a84c-4733-b754-dbb7b70b98cc in datapath 667a7cf2-00f8-4896-8e3d-8222fad7f397 updated
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.602 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 667a7cf2-00f8-4896-8e3d-8222fad7f397, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:48.603 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[da09ba07-f1b9-4cd9-ba55-e2e3178dd64d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v315: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 06 10:19:49 np0005548790.localdomain dnsmasq[315316]: exiting on receipt of SIGTERM
Dec 06 10:19:49 np0005548790.localdomain podman[315332]: 2025-12-06 10:19:49.189919419 +0000 UTC m=+0.060164479 container kill c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:49 np0005548790.localdomain systemd[1]: libpod-c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0.scope: Deactivated successfully.
Dec 06 10:19:49 np0005548790.localdomain podman[315348]: 2025-12-06 10:19:49.261366464 +0000 UTC m=+0.054823356 container died c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0-userdata-shm.mount: Deactivated successfully.
Dec 06 10:19:49 np0005548790.localdomain podman[315348]: 2025-12-06 10:19:49.298663973 +0000 UTC m=+0.092120805 container cleanup c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:19:49 np0005548790.localdomain systemd[1]: libpod-conmon-c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0.scope: Deactivated successfully.
Dec 06 10:19:49 np0005548790.localdomain podman[315349]: 2025-12-06 10:19:49.323043243 +0000 UTC m=+0.110281437 container remove c2bcd5c47f8d79355709df3f930932e4243105395316c01f162a6a4627b328c0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a51fe0ea-a792-451c-8baf-2b5d7719d125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:49.335 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:49 np0005548790.localdomain kernel: device tap8d2f31e2-69 left promiscuous mode
Dec 06 10:19:49 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:49Z|00152|binding|INFO|Releasing lport 8d2f31e2-69d8-4f51-96e6-9e70a13e384e from this chassis (sb_readonly=0)
Dec 06 10:19:49 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:49Z|00153|binding|INFO|Setting lport 8d2f31e2-69d8-4f51-96e6-9e70a13e384e down in Southbound
Dec 06 10:19:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:49.358 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:49.359 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:49.638 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a51fe0ea-a792-451c-8baf-2b5d7719d125', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a51fe0ea-a792-451c-8baf-2b5d7719d125', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90bd35d6ab7c40c58d9d1d61ff7a12d3', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0453294d-76be-4825-8ca3-e3add9a9a816, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=8d2f31e2-69d8-4f51-96e6-9e70a13e384e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:49.640 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 8d2f31e2-69d8-4f51-96e6-9e70a13e384e in datapath a51fe0ea-a792-451c-8baf-2b5d7719d125 unbound from our chassis
Dec 06 10:19:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:49.642 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a51fe0ea-a792-451c-8baf-2b5d7719d125, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:19:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:49.643 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb7d7a3-cb85-43cd-a9c9-61933cde1588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:19:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:19:50 np0005548790.localdomain podman[315375]: 2025-12-06 10:19:50.068365816 +0000 UTC m=+0.078783003 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:19:50 np0005548790.localdomain podman[315375]: 2025-12-06 10:19:50.144332292 +0000 UTC m=+0.154749569 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:19:50 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:19:50 np0005548790.localdomain podman[315374]: 2025-12-06 10:19:50.16197852 +0000 UTC m=+0.173592150 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:19:50 np0005548790.localdomain podman[315374]: 2025-12-06 10:19:50.171140287 +0000 UTC m=+0.182754397 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:19:50 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:19:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-fdc43e3128927d1cbccce2238e37681993846ad1b8f061ee4c9b6700e6dd94cd-merged.mount: Deactivated successfully.
Dec 06 10:19:50 np0005548790.localdomain ceph-mon[301742]: pgmap v315: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec 06 10:19:50 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2da51fe0ea\x2da792\x2d451c\x2d8baf\x2d2b5d7719d125.mount: Deactivated successfully.
Dec 06 10:19:50 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:50.457 262327 INFO neutron.agent.dhcp.agent [None req-6146de7e-3744-4e1c-bebf-93eca44b3e3b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:50 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:50.458 262327 INFO neutron.agent.dhcp.agent [None req-6146de7e-3744-4e1c-bebf-93eca44b3e3b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:50 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:50.922 2 INFO neutron.agent.securitygroups_rpc [None req-8ab54d3f-0dba-4adf-88cd-ebbf59b7b541 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['86fafa90-40d2-4e2b-87d7-dc3d530576aa', 'ae1eaa44-7360-485a-b85b-f1bfb95ce20b']
Dec 06 10:19:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 35 op/s
Dec 06 10:19:51 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:51.541 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:52 np0005548790.localdomain ceph-mon[301742]: pgmap v316: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 13 KiB/s wr, 35 op/s
Dec 06 10:19:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:52.557 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:52.674 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:52.679 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v317: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 14 KiB/s wr, 36 op/s
Dec 06 10:19:53 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:19:53.189 2 INFO neutron.agent.securitygroups_rpc [None req-cbc27e7f-4bef-4dfe-ad5b-dd1345427342 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['86fafa90-40d2-4e2b-87d7-dc3d530576aa']
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:19:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:19:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:54 np0005548790.localdomain systemd[1]: tmp-crun.bpXxlX.mount: Deactivated successfully.
Dec 06 10:19:54 np0005548790.localdomain dnsmasq[314994]: exiting on receipt of SIGTERM
Dec 06 10:19:54 np0005548790.localdomain podman[315439]: 2025-12-06 10:19:54.077959922 +0000 UTC m=+0.066998705 container kill b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:19:54 np0005548790.localdomain systemd[1]: libpod-b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b.scope: Deactivated successfully.
Dec 06 10:19:54 np0005548790.localdomain podman[315453]: 2025-12-06 10:19:54.153226359 +0000 UTC m=+0.055516194 container died b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:19:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b-userdata-shm.mount: Deactivated successfully.
Dec 06 10:19:54 np0005548790.localdomain podman[315453]: 2025-12-06 10:19:54.194115475 +0000 UTC m=+0.096405300 container remove b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c9d67231-ab4c-4f96-bca5-0b19bf32e0d0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:19:54 np0005548790.localdomain systemd[1]: libpod-conmon-b972c9b88c0a19fae22c75ab3a56d4efb32d033ad1e5df219efc24b539d1540b.scope: Deactivated successfully.
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: pgmap v317: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 14 KiB/s wr, 36 op/s
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.319361) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394319672, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 666, "num_deletes": 251, "total_data_size": 537930, "memory_usage": 549880, "flush_reason": "Manual Compaction"}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394325220, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 345511, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23050, "largest_seqno": 23711, "table_properties": {"data_size": 342325, "index_size": 1041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8349, "raw_average_key_size": 20, "raw_value_size": 335709, "raw_average_value_size": 818, "num_data_blocks": 46, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016367, "oldest_key_time": 1765016367, "file_creation_time": 1765016394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5647 microseconds, and 2038 cpu microseconds.
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.325255) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 345511 bytes OK
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.325275) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.327557) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.327579) EVENT_LOG_v1 {"time_micros": 1765016394327572, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.327597) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 534157, prev total WAL file size 534157, number of live WAL files 2.
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.328338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end)
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(337KB)], [33(18MB)]
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394328367, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 20181703, "oldest_snapshot_seqno": -1}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12836 keys, 18822534 bytes, temperature: kUnknown
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394410960, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 18822534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18748379, "index_size": 40966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 344217, "raw_average_key_size": 26, "raw_value_size": 18529162, "raw_average_value_size": 1443, "num_data_blocks": 1551, "num_entries": 12836, "num_filter_entries": 12836, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.411443) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 18822534 bytes
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.413267) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.8 rd, 227.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.9 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(112.9) write-amplify(54.5) OK, records in: 13355, records dropped: 519 output_compression: NoCompression
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.413299) EVENT_LOG_v1 {"time_micros": 1765016394413284, "job": 18, "event": "compaction_finished", "compaction_time_micros": 82785, "compaction_time_cpu_micros": 25728, "output_level": 6, "num_output_files": 1, "total_output_size": 18822534, "num_input_records": 13355, "num_output_records": 12836, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394413490, "job": 18, "event": "table_file_deletion", "file_number": 35}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016394416417, "job": 18, "event": "table_file_deletion", "file_number": 33}
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.328281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.416457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.416462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.416464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.416466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:54 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:19:54.416468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:19:55 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-50f684f63584aa27adcf81303e96b1315140634d356c57299f57780f47e0ab69-merged.mount: Deactivated successfully.
Dec 06 10:19:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 4.5 KiB/s wr, 2 op/s
Dec 06 10:19:56 np0005548790.localdomain ceph-mon[301742]: pgmap v318: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 4.5 KiB/s wr, 2 op/s
Dec 06 10:19:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 4.5 KiB/s wr, 2 op/s
Dec 06 10:19:57 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:57.616 262327 INFO neutron.agent.dhcp.agent [None req-f9e31bd3-9206-447c-9049-53d96a8e24e5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:57 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2dc9d67231\x2dab4c\x2d4f96\x2dbca5\x2d0b19bf32e0d0.mount: Deactivated successfully.
Dec 06 10:19:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:57.680 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:19:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:57.681 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:19:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:57.682 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:19:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:57.682 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:19:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:57.710 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:57.710 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:19:58 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:58.145 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:19:58 np0005548790.localdomain ceph-mon[301742]: pgmap v319: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 4.5 KiB/s wr, 2 op/s
Dec 06 10:19:58 np0005548790.localdomain sudo[315478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:19:58 np0005548790.localdomain sudo[315478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:19:58 np0005548790.localdomain sudo[315478]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:19:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:19:59 np0005548790.localdomain sudo[315496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:19:59 np0005548790.localdomain sudo[315496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta.tmp'
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta.tmp' to config b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta'
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "format": "json"}]: dispatch
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 5.0 KiB/s wr, 2 op/s
Dec 06 10:19:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:19:59.189 262327 INFO neutron.agent.linux.ip_lib [None req-689ed9ef-a504-40b4-a6eb-69383c4e1d28 - - - - - -] Device tap29a03e65-bd cannot be used as it has no MAC address
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.211 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain kernel: device tap29a03e65-bd entered promiscuous mode
Dec 06 10:19:59 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016399.2221] manager: (tap29a03e65-bd): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Dec 06 10:19:59 np0005548790.localdomain systemd-udevd[315524]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:19:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:59Z|00154|binding|INFO|Claiming lport 29a03e65-bdd4-432e-ab23-7ec271752c56 for this chassis.
Dec 06 10:19:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:59Z|00155|binding|INFO|29a03e65-bdd4-432e-ab23-7ec271752c56: Claiming unknown
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.223 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:59Z|00156|binding|INFO|Setting lport 29a03e65-bdd4-432e-ab23-7ec271752c56 ovn-installed in OVS
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.263 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.298 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.330 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:59Z|00157|binding|INFO|Setting lport 29a03e65-bdd4-432e-ab23-7ec271752c56 up in Southbound
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.357 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-42333291-f903-457d-9049-510f34840094', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42333291-f903-457d-9049-510f34840094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65db5b0c-83cb-4c26-99a0-ff16ae53dfc3, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=29a03e65-bdd4-432e-ab23-7ec271752c56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.358 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 29a03e65-bdd4-432e-ab23-7ec271752c56 in datapath 42333291-f903-457d-9049-510f34840094 bound to our chassis
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.358 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 42333291-f903-457d-9049-510f34840094 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.359 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[609e117d-6ca4-46ce-a65c-197587dc1a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:59Z|00158|binding|INFO|Removing iface tap29a03e65-bd ovn-installed in OVS
Dec 06 10:19:59 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:19:59Z|00159|binding|INFO|Removing lport 29a03e65-bdd4-432e-ab23-7ec271752c56 ovn-installed in OVS
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.370 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 01e0548f-408b-45c5-92eb-708476f0990a with type ""
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.371 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-42333291-f903-457d-9049-510f34840094', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42333291-f903-457d-9049-510f34840094', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65db5b0c-83cb-4c26-99a0-ff16ae53dfc3, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=29a03e65-bdd4-432e-ab23-7ec271752c56) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.372 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 29a03e65-bdd4-432e-ab23-7ec271752c56 in datapath 42333291-f903-457d-9049-510f34840094 unbound from our chassis
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.372 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 42333291-f903-457d-9049-510f34840094 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:19:59 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:19:59.373 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8b589536-126d-4ab2-8d6d-c19ab681274d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.375 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:19:59.380 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:19:59 np0005548790.localdomain sudo[315496]: pam_unix(sudo:session): session closed for user root
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev b8c3a438-2725-4645-9088-260a577339f9 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev b8c3a438-2725-4645-9088-260a577339f9 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:19:59 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event b8c3a438-2725-4645-9088-260a577339f9 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:19:59 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:20:00 np0005548790.localdomain sudo[315611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:20:00 np0005548790.localdomain sudo[315611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:20:00 np0005548790.localdomain sudo[315611]: pam_unix(sudo:session): session closed for user root
Dec 06 10:20:00 np0005548790.localdomain podman[315610]: 
Dec 06 10:20:00 np0005548790.localdomain podman[315610]: 2025-12-06 10:20:00.109881404 +0000 UTC m=+0.075154265 container create d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.147 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:00 np0005548790.localdomain systemd[1]: Started libpod-conmon-d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315.scope.
Dec 06 10:20:00 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:00 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fd4bdccfb64410229adcfee7c773075a4f59708cfb48ae605129b1e6294fc1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:00 np0005548790.localdomain podman[315610]: 2025-12-06 10:20:00.078497735 +0000 UTC m=+0.043770596 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:00 np0005548790.localdomain podman[315610]: 2025-12-06 10:20:00.188126342 +0000 UTC m=+0.153399223 container init d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:00 np0005548790.localdomain podman[315610]: 2025-12-06 10:20:00.199544431 +0000 UTC m=+0.164817312 container start d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:00 np0005548790.localdomain dnsmasq[315646]: started, version 2.85 cachesize 150
Dec 06 10:20:00 np0005548790.localdomain dnsmasq[315646]: DNS service limited to local subnets
Dec 06 10:20:00 np0005548790.localdomain dnsmasq[315646]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:00 np0005548790.localdomain dnsmasq[315646]: warning: no upstream servers configured
Dec 06 10:20:00 np0005548790.localdomain dnsmasq-dhcp[315646]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:00 np0005548790.localdomain dnsmasq[315646]: read /var/lib/neutron/dhcp/42333291-f903-457d-9049-510f34840094/addn_hosts - 0 addresses
Dec 06 10:20:00 np0005548790.localdomain dnsmasq-dhcp[315646]: read /var/lib/neutron/dhcp/42333291-f903-457d-9049-510f34840094/host
Dec 06 10:20:00 np0005548790.localdomain dnsmasq-dhcp[315646]: read /var/lib/neutron/dhcp/42333291-f903-457d-9049-510f34840094/opts
Dec 06 10:20:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:00.331 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:00 np0005548790.localdomain kernel: device tap29a03e65-bd left promiscuous mode
Dec 06 10:20:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:00.351 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "format": "json"}]: dispatch
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: pgmap v320: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 5.0 KiB/s wr, 2 op/s
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:20:00 np0005548790.localdomain ceph-mon[301742]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.429 262327 INFO neutron.agent.dhcp.agent [None req-c01f0675-2918-4851-a368-60e8ac2f4ec9 - - - - - -] DHCP configuration for ports {'e40ec8c5-9f39-4cf9-b03d-df445072a9de'} is completed
Dec 06 10:20:00 np0005548790.localdomain dnsmasq[315646]: read /var/lib/neutron/dhcp/42333291-f903-457d-9049-510f34840094/addn_hosts - 0 addresses
Dec 06 10:20:00 np0005548790.localdomain dnsmasq-dhcp[315646]: read /var/lib/neutron/dhcp/42333291-f903-457d-9049-510f34840094/host
Dec 06 10:20:00 np0005548790.localdomain dnsmasq-dhcp[315646]: read /var/lib/neutron/dhcp/42333291-f903-457d-9049-510f34840094/opts
Dec 06 10:20:00 np0005548790.localdomain podman[315666]: 2025-12-06 10:20:00.525385911 +0000 UTC m=+0.058347170 container kill d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent [None req-689ed9ef-a504-40b4-a6eb-69383c4e1d28 - - - - - -] Unable to reload_allocations dhcp for 42333291-f903-457d-9049-510f34840094.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap29a03e65-bd not found in namespace qdhcp-42333291-f903-457d-9049-510f34840094.
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap29a03e65-bd not found in namespace qdhcp-42333291-f903-457d-9049-510f34840094.
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.551 262327 ERROR neutron.agent.dhcp.agent 
Dec 06 10:20:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:00.556 262327 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 06 10:20:01 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:01.165 262327 INFO neutron.agent.dhcp.agent [None req-ec2e0b7b-ca98-4d5c-b043-2d0825631077 - - - - - -] All active networks have been fetched through RPC.
Dec 06 10:20:01 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:01.165 262327 INFO neutron.agent.dhcp.agent [-] Starting network 42333291-f903-457d-9049-510f34840094 dhcp configuration
Dec 06 10:20:01 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:01.166 262327 INFO neutron.agent.dhcp.agent [-] Finished network 42333291-f903-457d-9049-510f34840094 dhcp configuration
Dec 06 10:20:01 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:01.166 262327 INFO neutron.agent.dhcp.agent [None req-ec2e0b7b-ca98-4d5c-b043-2d0825631077 - - - - - -] Synchronizing state complete
Dec 06 10:20:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s wr, 0 op/s
Dec 06 10:20:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:01.340 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:01 np0005548790.localdomain dnsmasq[315646]: exiting on receipt of SIGTERM
Dec 06 10:20:01 np0005548790.localdomain podman[315697]: 2025-12-06 10:20:01.578372221 +0000 UTC m=+0.058603286 container kill d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:20:01 np0005548790.localdomain systemd[1]: libpod-d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315.scope: Deactivated successfully.
Dec 06 10:20:01 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:01.585 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=528a9f17-509a-4c49-a9ac-4a6363f2178f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=659e29bd-a84c-4733-b754-dbb7b70b98cc) old=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:01 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:01.587 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 659e29bd-a84c-4733-b754-dbb7b70b98cc in datapath 667a7cf2-00f8-4896-8e3d-8222fad7f397 updated
Dec 06 10:20:01 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:01.590 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 667a7cf2-00f8-4896-8e3d-8222fad7f397, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:20:01 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:01.591 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[196d61c9-2f2b-4a98-815f-68cd013b37a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:01 np0005548790.localdomain podman[315709]: 2025-12-06 10:20:01.6359543 +0000 UTC m=+0.044620179 container died d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:20:01 np0005548790.localdomain podman[315709]: 2025-12-06 10:20:01.668274495 +0000 UTC m=+0.076940304 container cleanup d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:20:01 np0005548790.localdomain systemd[1]: libpod-conmon-d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315.scope: Deactivated successfully.
Dec 06 10:20:01 np0005548790.localdomain podman[315716]: 2025-12-06 10:20:01.685996924 +0000 UTC m=+0.081355603 container remove d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-42333291-f903-457d-9049-510f34840094, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:20:02 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:20:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:20:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-92fd4bdccfb64410229adcfee7c773075a4f59708cfb48ae605129b1e6294fc1-merged.mount: Deactivated successfully.
Dec 06 10:20:02 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d312a10d5a52b3c6c16932b9d346744f1ed40b127a64252ed5179c54753fe315-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:02 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d42333291\x2df903\x2d457d\x2d9049\x2d510f34840094.mount: Deactivated successfully.
Dec 06 10:20:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "snap_name": "86b1b260-491a-4ff7-9199-adafe7f88c8f", "format": "json"}]: dispatch
Dec 06 10:20:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:86b1b260-491a-4ff7-9199-adafe7f88c8f, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:86b1b260-491a-4ff7-9199-adafe7f88c8f, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:02 np0005548790.localdomain ceph-mon[301742]: pgmap v321: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s wr, 0 op/s
Dec 06 10:20:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:20:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:02.753 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s wr, 1 op/s
Dec 06 10:20:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "snap_name": "86b1b260-491a-4ff7-9199-adafe7f88c8f", "format": "json"}]: dispatch
Dec 06 10:20:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:04 np0005548790.localdomain ceph-mon[301742]: pgmap v322: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s wr, 1 op/s
Dec 06 10:20:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s wr, 1 op/s
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "snap_name": "86b1b260-491a-4ff7-9199-adafe7f88c8f_1b65360c-9474-4053-9db5-09821cc600f9", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86b1b260-491a-4ff7-9199-adafe7f88c8f_1b65360c-9474-4053-9db5-09821cc600f9, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta.tmp'
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta.tmp' to config b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta'
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86b1b260-491a-4ff7-9199-adafe7f88c8f_1b65360c-9474-4053-9db5-09821cc600f9, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "snap_name": "86b1b260-491a-4ff7-9199-adafe7f88c8f", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86b1b260-491a-4ff7-9199-adafe7f88c8f, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta.tmp'
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta.tmp' to config b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774/.meta'
Dec 06 10:20:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:86b1b260-491a-4ff7-9199-adafe7f88c8f, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:06 np0005548790.localdomain ceph-mon[301742]: pgmap v323: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s wr, 1 op/s
Dec 06 10:20:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s wr, 1 op/s
Dec 06 10:20:07 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:07.317 262327 INFO neutron.agent.linux.ip_lib [None req-6d8d0749-801d-4349-ae76-9f2b48c4b3f2 - - - - - -] Device tap95e0b5c7-fe cannot be used as it has no MAC address
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.339 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 np0005548790.localdomain kernel: device tap95e0b5c7-fe entered promiscuous mode
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.347 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016407.3475] manager: (tap95e0b5c7-fe): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Dec 06 10:20:07 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:07Z|00160|binding|INFO|Claiming lport 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 for this chassis.
Dec 06 10:20:07 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:07Z|00161|binding|INFO|95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6: Claiming unknown
Dec 06 10:20:07 np0005548790.localdomain systemd-udevd[315751]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:07 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:07.355 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-1fd699fe-a5f3-49c0-88c7-01911eab5153', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fd699fe-a5f3-49c0-88c7-01911eab5153', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99a438ef-9e8b-4c22-9cf2-faf6518a2343, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:07 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:07.358 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 in datapath 1fd699fe-a5f3-49c0-88c7-01911eab5153 bound to our chassis
Dec 06 10:20:07 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:07.359 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1fd699fe-a5f3-49c0-88c7-01911eab5153 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:07 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:07.360 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[743076bb-3329-4b09-beb7-1f12681309ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:07Z|00162|binding|INFO|Setting lport 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 ovn-installed in OVS
Dec 06 10:20:07 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:07Z|00163|binding|INFO|Setting lport 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 up in Southbound
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.381 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.383 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap95e0b5c7-fe: No such device
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.414 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.442 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:20:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "snap_name": "86b1b260-491a-4ff7-9199-adafe7f88c8f_1b65360c-9474-4053-9db5-09821cc600f9", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "snap_name": "86b1b260-491a-4ff7-9199-adafe7f88c8f", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:07 np0005548790.localdomain podman[315779]: 2025-12-06 10:20:07.571060282 +0000 UTC m=+0.080729066 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:07 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:07.588 2 INFO neutron.agent.securitygroups_rpc [None req-6d59f8dd-76ee-4672-86ac-2d91b87c0791 260dfc8941214c308c05293af65bdae9 24086b701d6b4d4081d2e63578d18d24 - - default default] Security group member updated ['ea587027-2c02-4165-a90f-98eaf0ce1ddb']
Dec 06 10:20:07 np0005548790.localdomain podman[315779]: 2025-12-06 10:20:07.607272613 +0000 UTC m=+0.116941407 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:07 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:07.615 2 INFO neutron.agent.securitygroups_rpc [None req-1250ea59-7c13-4a58-b22f-38de2df53542 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['c05cd5e8-c5d4-4d05-80ba-b6a4af8b3ba8']
Dec 06 10:20:07 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:20:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:07.797 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:08 np0005548790.localdomain podman[315840]: 
Dec 06 10:20:08 np0005548790.localdomain podman[315840]: 2025-12-06 10:20:08.276336031 +0000 UTC m=+0.078011062 container create fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fd699fe-a5f3-49c0-88c7-01911eab5153, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:08 np0005548790.localdomain systemd[1]: Started libpod-conmon-fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c.scope.
Dec 06 10:20:08 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:08 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97aced54ce5c907e9167b63ed37d52762223f03a0d1ef7b2633584df53445ee0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:08 np0005548790.localdomain podman[315840]: 2025-12-06 10:20:08.235174707 +0000 UTC m=+0.036849798 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:08 np0005548790.localdomain podman[315840]: 2025-12-06 10:20:08.337014634 +0000 UTC m=+0.138689685 container init fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fd699fe-a5f3-49c0-88c7-01911eab5153, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:08 np0005548790.localdomain podman[315840]: 2025-12-06 10:20:08.346109619 +0000 UTC m=+0.147784670 container start fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fd699fe-a5f3-49c0-88c7-01911eab5153, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:20:08 np0005548790.localdomain dnsmasq[315859]: started, version 2.85 cachesize 150
Dec 06 10:20:08 np0005548790.localdomain dnsmasq[315859]: DNS service limited to local subnets
Dec 06 10:20:08 np0005548790.localdomain dnsmasq[315859]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:08 np0005548790.localdomain dnsmasq[315859]: warning: no upstream servers configured
Dec 06 10:20:08 np0005548790.localdomain dnsmasq-dhcp[315859]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:08 np0005548790.localdomain dnsmasq[315859]: read /var/lib/neutron/dhcp/1fd699fe-a5f3-49c0-88c7-01911eab5153/addn_hosts - 0 addresses
Dec 06 10:20:08 np0005548790.localdomain dnsmasq-dhcp[315859]: read /var/lib/neutron/dhcp/1fd699fe-a5f3-49c0-88c7-01911eab5153/host
Dec 06 10:20:08 np0005548790.localdomain dnsmasq-dhcp[315859]: read /var/lib/neutron/dhcp/1fd699fe-a5f3-49c0-88c7-01911eab5153/opts
Dec 06 10:20:08 np0005548790.localdomain ceph-mon[301742]: pgmap v324: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s wr, 1 op/s
Dec 06 10:20:08 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:08.535 262327 INFO neutron.agent.dhcp.agent [None req-bd543e46-22bf-4d98-8d93-0c713f5f3287 - - - - - -] DHCP configuration for ports {'d55dbb67-2d00-4ea6-96d9-b76e9abacc84'} is completed
Dec 06 10:20:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:20:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                          Cumulative WAL: 10K writes, 2822 syncs, 3.55 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 4703 writes, 14K keys, 4703 commit groups, 1.0 writes per commit group, ingest: 10.73 MB, 0.02 MB/s
                                                          Interval WAL: 4703 writes, 2083 syncs, 2.26 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:20:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 7.3 KiB/s wr, 2 op/s
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aaafb223-94a1-4885-a088-5199e647d774", "format": "json"}]: dispatch
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aaafb223-94a1-4885-a088-5199e647d774, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aaafb223-94a1-4885-a088-5199e647d774, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:09 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:09.541+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aaafb223-94a1-4885-a088-5199e647d774' of type subvolume
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aaafb223-94a1-4885-a088-5199e647d774' of type subvolume
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aaafb223-94a1-4885-a088-5199e647d774'' moved to trashcan
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aaafb223-94a1-4885-a088-5199e647d774, vol_name:cephfs) < ""
Dec 06 10:20:10 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:10.430 2 INFO neutron.agent.securitygroups_rpc [None req-e7d31636-8439-4e6f-9785-a953cb0386af 260dfc8941214c308c05293af65bdae9 24086b701d6b4d4081d2e63578d18d24 - - default default] Security group member updated ['ea587027-2c02-4165-a90f-98eaf0ce1ddb']
Dec 06 10:20:10 np0005548790.localdomain ceph-mon[301742]: pgmap v325: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 7.3 KiB/s wr, 2 op/s
Dec 06 10:20:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aaafb223-94a1-4885-a088-5199e647d774", "format": "json"}]: dispatch
Dec 06 10:20:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aaafb223-94a1-4885-a088-5199e647d774", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e164 e164: 6 total, 6 up, 6 in
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, vol_name:cephfs) < ""
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d4fe01f5-ec52-46f8-9434-cea0a18dbdb7/.meta.tmp'
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d4fe01f5-ec52-46f8-9434-cea0a18dbdb7/.meta.tmp' to config b'/volumes/_nogroup/d4fe01f5-ec52-46f8-9434-cea0a18dbdb7/.meta'
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, vol_name:cephfs) < ""
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "format": "json"}]: dispatch
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, vol_name:cephfs) < ""
Dec 06 10:20:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, vol_name:cephfs) < ""
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s
Dec 06 10:20:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:11.212 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=528a9f17-509a-4c49-a9ac-4a6363f2178f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=659e29bd-a84c-4733-b754-dbb7b70b98cc) old=Port_Binding(mac=['fa:16:3e:94:c3:b8 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-667a7cf2-00f8-4896-8e3d-8222fad7f397', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5f8e1c4c589749b99178bbc7c2bea3f0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:11.214 159200 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 659e29bd-a84c-4733-b754-dbb7b70b98cc in datapath 667a7cf2-00f8-4896-8e3d-8222fad7f397 updated
Dec 06 10:20:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:11.216 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 667a7cf2-00f8-4896-8e3d-8222fad7f397, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:20:11 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:11.217 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[6afb3c92-4d3d-4ceb-b8a7-fcbf8e1b1553]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:11 np0005548790.localdomain ceph-mon[301742]: osdmap e164: 6 total, 6 up, 6 in
Dec 06 10:20:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, vol_name:cephfs) < ""
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b/.meta.tmp'
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b/.meta.tmp' to config b'/volumes/_nogroup/7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b/.meta'
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, vol_name:cephfs) < ""
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "format": "json"}]: dispatch
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, vol_name:cephfs) < ""
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, vol_name:cephfs) < ""
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:12.222 2 INFO neutron.agent.securitygroups_rpc [None req-6d93ce58-a6ee-4351-b70e-4269edfdd4c8 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['81acd248-ff6c-407a-a3e7-57e59597aa28', 'c05cd5e8-c5d4-4d05-80ba-b6a4af8b3ba8', '1d275e53-d6a2-4014-8325-c04642bc5279']
Dec 06 10:20:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:12.251 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:12.253 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:20:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:12.252 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: tmp-crun.Cow1Ec.mount: Deactivated successfully.
Dec 06 10:20:12 np0005548790.localdomain podman[315862]: 2025-12-06 10:20:12.584734003 +0000 UTC m=+0.085416302 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "format": "json"}]: dispatch
Dec 06 10:20:12 np0005548790.localdomain ceph-mon[301742]: pgmap v327: 177 pgs: 177 active+clean; 192 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s
Dec 06 10:20:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:12 np0005548790.localdomain podman[315862]: 2025-12-06 10:20:12.594089227 +0000 UTC m=+0.094771556 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:20:12 np0005548790.localdomain podman[315863]: 2025-12-06 10:20:12.687954788 +0000 UTC m=+0.185337848 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:20:12 np0005548790.localdomain podman[315863]: 2025-12-06 10:20:12.704998879 +0000 UTC m=+0.202381889 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:20:12 np0005548790.localdomain podman[315861]: 2025-12-06 10:20:12.788823728 +0000 UTC m=+0.289867247 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:20:12 np0005548790.localdomain podman[315861]: 2025-12-06 10:20:12.826266782 +0000 UTC m=+0.327310271 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:20:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:12.830 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:12 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:20:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:12.873 2 INFO neutron.agent.securitygroups_rpc [None req-5ab424f2-09a3-4942-a99f-ad10877e0761 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['81acd248-ff6c-407a-a3e7-57e59597aa28', '1d275e53-d6a2-4014-8325-c04642bc5279']
Dec 06 10:20:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:20:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.2 total, 600.0 interval
                                                          Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                                          Cumulative WAL: 13K writes, 4123 syncs, 3.23 writes per sync, written: 0.04 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 7636 writes, 24K keys, 7636 commit groups, 1.0 writes per commit group, ingest: 22.35 MB, 0.04 MB/s
                                                          Interval WAL: 7636 writes, 3272 syncs, 2.33 writes per sync, written: 0.02 GB, 0.04 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:20:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 192 MiB data, 896 MiB used, 41 GiB / 42 GiB avail; 3.3 KiB/s rd, 15 KiB/s wr, 9 op/s
Dec 06 10:20:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e165 e165: 6 total, 6 up, 6 in
Dec 06 10:20:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "format": "json"}]: dispatch
Dec 06 10:20:13 np0005548790.localdomain ceph-mon[301742]: osdmap e165: 6 total, 6 up, 6 in
Dec 06 10:20:13 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:13.721 262327 INFO neutron.agent.linux.ip_lib [None req-7a9aa61d-79ad-4578-a894-ec4e64e5022c - - - - - -] Device tap0321377d-c7 cannot be used as it has no MAC address
Dec 06 10:20:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:13.745 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:13 np0005548790.localdomain kernel: device tap0321377d-c7 entered promiscuous mode
Dec 06 10:20:13 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016413.7534] manager: (tap0321377d-c7): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Dec 06 10:20:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:13.754 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:13Z|00164|binding|INFO|Claiming lport 0321377d-c77d-49b4-9d1b-9538a197b836 for this chassis.
Dec 06 10:20:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:13Z|00165|binding|INFO|0321377d-c77d-49b4-9d1b-9538a197b836: Claiming unknown
Dec 06 10:20:13 np0005548790.localdomain systemd-udevd[315931]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:13.765 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=391dad62-94ba-4f16-98ce-d5116b9429d7, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=0321377d-c77d-49b4-9d1b-9538a197b836) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:13.767 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 0321377d-c77d-49b4-9d1b-9538a197b836 in datapath 02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20 bound to our chassis
Dec 06 10:20:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:13.768 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:13.769 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d311b8a6-f708-4624-a3c3-dac945ff288d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:13Z|00166|binding|INFO|Setting lport 0321377d-c77d-49b4-9d1b-9538a197b836 ovn-installed in OVS
Dec 06 10:20:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:13Z|00167|binding|INFO|Setting lport 0321377d-c77d-49b4-9d1b-9538a197b836 up in Southbound
Dec 06 10:20:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:13.793 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap0321377d-c7: No such device
Dec 06 10:20:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:13.831 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:13.898 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, vol_name:cephfs) < ""
Dec 06 10:20:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4d6e1245-952e-4af4-91c8-d91b7174dcef/.meta.tmp'
Dec 06 10:20:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4d6e1245-952e-4af4-91c8-d91b7174dcef/.meta.tmp' to config b'/volumes/_nogroup/4d6e1245-952e-4af4-91c8-d91b7174dcef/.meta'
Dec 06 10:20:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, vol_name:cephfs) < ""
Dec 06 10:20:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "format": "json"}]: dispatch
Dec 06 10:20:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, vol_name:cephfs) < ""
Dec 06 10:20:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, vol_name:cephfs) < ""
Dec 06 10:20:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:14 np0005548790.localdomain ceph-mon[301742]: pgmap v328: 177 pgs: 177 active+clean; 192 MiB data, 896 MiB used, 41 GiB / 42 GiB avail; 3.3 KiB/s rd, 15 KiB/s wr, 9 op/s
Dec 06 10:20:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:14 np0005548790.localdomain podman[316002]: 
Dec 06 10:20:14 np0005548790.localdomain podman[316002]: 2025-12-06 10:20:14.739418423 +0000 UTC m=+0.095591698 container create 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:20:14 np0005548790.localdomain systemd[1]: Started libpod-conmon-901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e.scope.
Dec 06 10:20:14 np0005548790.localdomain podman[316002]: 2025-12-06 10:20:14.693331386 +0000 UTC m=+0.049504711 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:14 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:14 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/449c806717219ec28a0158ac663e231ee8e12a65b6f924231d776c3bac507f8f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:14 np0005548790.localdomain podman[316002]: 2025-12-06 10:20:14.832414051 +0000 UTC m=+0.188587316 container init 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:20:14 np0005548790.localdomain podman[316002]: 2025-12-06 10:20:14.841564338 +0000 UTC m=+0.197737603 container start 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:14 np0005548790.localdomain dnsmasq[316022]: started, version 2.85 cachesize 150
Dec 06 10:20:14 np0005548790.localdomain dnsmasq[316022]: DNS service limited to local subnets
Dec 06 10:20:14 np0005548790.localdomain dnsmasq[316022]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:14 np0005548790.localdomain dnsmasq[316022]: warning: no upstream servers configured
Dec 06 10:20:14 np0005548790.localdomain dnsmasq-dhcp[316022]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:14 np0005548790.localdomain dnsmasq[316022]: read /var/lib/neutron/dhcp/02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20/addn_hosts - 0 addresses
Dec 06 10:20:14 np0005548790.localdomain dnsmasq-dhcp[316022]: read /var/lib/neutron/dhcp/02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20/host
Dec 06 10:20:14 np0005548790.localdomain dnsmasq-dhcp[316022]: read /var/lib/neutron/dhcp/02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20/opts
Dec 06 10:20:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:14.919 262327 INFO neutron.agent.linux.ip_lib [None req-ebe99ef0-a185-42f0-969e-8694fa560e83 - - - - - -] Device tapd02e78d4-85 cannot be used as it has no MAC address
Dec 06 10:20:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:14.989 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:14 np0005548790.localdomain kernel: device tapd02e78d4-85 entered promiscuous mode
Dec 06 10:20:14 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016414.9971] manager: (tapd02e78d4-85): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Dec 06 10:20:14 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:14Z|00168|binding|INFO|Claiming lport d02e78d4-855e-4b7e-93d0-6b4ebd411b6c for this chassis.
Dec 06 10:20:14 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:14Z|00169|binding|INFO|d02e78d4-855e-4b7e-93d0-6b4ebd411b6c: Claiming unknown
Dec 06 10:20:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:14.997 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:15.007 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-2d4c92d0-1543-425e-a12e-93b686695cb2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d4c92d0-1543-425e-a12e-93b686695cb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=480bd9c6-b4ff-4b92-b254-164d0b300da2, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=d02e78d4-855e-4b7e-93d0-6b4ebd411b6c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:15.009 159200 INFO neutron.agent.ovn.metadata.agent [-] Port d02e78d4-855e-4b7e-93d0-6b4ebd411b6c in datapath 2d4c92d0-1543-425e-a12e-93b686695cb2 bound to our chassis
Dec 06 10:20:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:15.010 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2d4c92d0-1543-425e-a12e-93b686695cb2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:15 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:15.011 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[5c058b8e-3f7f-41a3-8a8a-eae78427c996]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:15 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:15.025 262327 INFO neutron.agent.dhcp.agent [None req-ff6d7dac-d36b-407c-b31d-21901d3b2428 - - - - - -] DHCP configuration for ports {'ced2e5a0-aaa2-49f1-a448-2a8c1bac4dd6'} is completed
Dec 06 10:20:15 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:15Z|00170|binding|INFO|Setting lport d02e78d4-855e-4b7e-93d0-6b4ebd411b6c ovn-installed in OVS
Dec 06 10:20:15 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:15Z|00171|binding|INFO|Setting lport d02e78d4-855e-4b7e-93d0-6b4ebd411b6c up in Southbound
Dec 06 10:20:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:15.041 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:15.070 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:15.094 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 192 MiB data, 896 MiB used, 41 GiB / 42 GiB avail; 4.1 KiB/s rd, 19 KiB/s wr, 11 op/s
Dec 06 10:20:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "format": "json"}]: dispatch
Dec 06 10:20:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e166 e166: 6 total, 6 up, 6 in
Dec 06 10:20:15 np0005548790.localdomain podman[316084]: 
Dec 06 10:20:15 np0005548790.localdomain podman[316084]: 2025-12-06 10:20:15.912503995 +0000 UTC m=+0.088956619 container create 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Dec 06 10:20:15 np0005548790.localdomain systemd[1]: Started libpod-conmon-54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4.scope.
Dec 06 10:20:15 np0005548790.localdomain podman[316084]: 2025-12-06 10:20:15.870283602 +0000 UTC m=+0.046736276 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:15 np0005548790.localdomain systemd[1]: tmp-crun.N7aX9T.mount: Deactivated successfully.
Dec 06 10:20:15 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:15 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54e46ed7b711e18fd2bcfd5372d2cf626721dda0b9d8ca45d53eecff2e46161/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:15 np0005548790.localdomain podman[316084]: 2025-12-06 10:20:15.991627006 +0000 UTC m=+0.168079640 container init 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:20:16 np0005548790.localdomain podman[316084]: 2025-12-06 10:20:16.000844586 +0000 UTC m=+0.177297230 container start 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: started, version 2.85 cachesize 150
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: DNS service limited to local subnets
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: warning: no upstream servers configured
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/addn_hosts - 0 addresses
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/host
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/opts
Dec 06 10:20:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:16.061 262327 INFO neutron.agent.dhcp.agent [None req-ebe99ef0-a185-42f0-969e-8694fa560e83 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:14Z, description=, device_id=c48f102b-594e-4f9a-9380-ddbbd3b3134a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85793520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85793a00>], id=9046a2f3-9f73-4df0-a1aa-993bc772e249, ip_allocation=immediate, mac_address=fa:16:3e:bf:8a:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:12Z, description=, dns_domain=, id=2d4c92d0-1543-425e-a12e-93b686695cb2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1929639410, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29519, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2331, status=ACTIVE, subnets=['5df78566-239c-43f9-8c84-673c2e29841f'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:13Z, vlan_transparent=None, network_id=2d4c92d0-1543-425e-a12e-93b686695cb2, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2351, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:14Z on network 2d4c92d0-1543-425e-a12e-93b686695cb2
Dec 06 10:20:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:16.188 262327 INFO neutron.agent.dhcp.agent [None req-929d68c5-7dd0-4911-ac90-f7c8729fa402 - - - - - -] DHCP configuration for ports {'2d5eb5f2-c034-44c2-a7db-5c9fbdc0a7ed'} is completed
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/addn_hosts - 1 addresses
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/host
Dec 06 10:20:16 np0005548790.localdomain podman[316121]: 2025-12-06 10:20:16.245877838 +0000 UTC m=+0.054946698 container kill 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/opts
Dec 06 10:20:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:16.427 262327 INFO neutron.agent.dhcp.agent [None req-917d5b69-7ed3-4bc9-adea-b9034af2efcf - - - - - -] DHCP configuration for ports {'9046a2f3-9f73-4df0-a1aa-993bc772e249'} is completed
Dec 06 10:20:16 np0005548790.localdomain ceph-mon[301742]: pgmap v330: 177 pgs: 177 active+clean; 192 MiB data, 896 MiB used, 41 GiB / 42 GiB avail; 4.1 KiB/s rd, 19 KiB/s wr, 11 op/s
Dec 06 10:20:16 np0005548790.localdomain ceph-mon[301742]: osdmap e166: 6 total, 6 up, 6 in
Dec 06 10:20:16 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:16.702 2 INFO neutron.agent.securitygroups_rpc [None req-9395d16c-29ac-47bb-b03d-1c577d966648 b9f4d254b45d482eab7cfb178c231d9a 5f8e1c4c589749b99178bbc7c2bea3f0 - - default default] Security group member updated ['cd56abe4-204c-4363-ad64-0a6840260727']
Dec 06 10:20:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:16.709 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:14Z, description=, device_id=c48f102b-594e-4f9a-9380-ddbbd3b3134a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856d0f70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856d0df0>], id=9046a2f3-9f73-4df0-a1aa-993bc772e249, ip_allocation=immediate, mac_address=fa:16:3e:bf:8a:4b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:12Z, description=, dns_domain=, id=2d4c92d0-1543-425e-a12e-93b686695cb2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1929639410, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=29519, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2331, status=ACTIVE, subnets=['5df78566-239c-43f9-8c84-673c2e29841f'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:13Z, vlan_transparent=None, network_id=2d4c92d0-1543-425e-a12e-93b686695cb2, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2351, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:14Z on network 2d4c92d0-1543-425e-a12e-93b686695cb2
Dec 06 10:20:16 np0005548790.localdomain podman[316159]: 2025-12-06 10:20:16.890755022 +0000 UTC m=+0.060883888 container kill 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:20:16 np0005548790.localdomain dnsmasq[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/addn_hosts - 1 addresses
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/host
Dec 06 10:20:16 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/opts
Dec 06 10:20:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:17.095 262327 INFO neutron.agent.dhcp.agent [None req-8ad997dc-bf44-4040-9898-1c1f2b095602 - - - - - -] DHCP configuration for ports {'9046a2f3-9f73-4df0-a1aa-993bc772e249'} is completed
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 192 MiB data, 896 MiB used, 41 GiB / 42 GiB avail; 5.0 KiB/s rd, 16 KiB/s wr, 11 op/s
Dec 06 10:20:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:17.255 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:20:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e167 e167: 6 total, 6 up, 6 in
Dec 06 10:20:17 np0005548790.localdomain dnsmasq[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/addn_hosts - 0 addresses
Dec 06 10:20:17 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/host
Dec 06 10:20:17 np0005548790.localdomain podman[316196]: 2025-12-06 10:20:17.719888064 +0000 UTC m=+0.070680474 container kill 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:20:17 np0005548790.localdomain dnsmasq-dhcp[316102]: read /var/lib/neutron/dhcp/2d4c92d0-1543-425e-a12e-93b686695cb2/opts
Dec 06 10:20:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:20:17 np0005548790.localdomain systemd[1]: tmp-crun.cU2huV.mount: Deactivated successfully.
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "format": "json"}]: dispatch
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:17.883 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d6e1245-952e-4af4-91c8-d91b7174dcef' of type subvolume
Dec 06 10:20:17 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:17.883+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4d6e1245-952e-4af4-91c8-d91b7174dcef' of type subvolume
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, vol_name:cephfs) < ""
Dec 06 10:20:17 np0005548790.localdomain podman[316211]: 2025-12-06 10:20:17.889369411 +0000 UTC m=+0.144206884 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4d6e1245-952e-4af4-91c8-d91b7174dcef'' moved to trashcan
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4d6e1245-952e-4af4-91c8-d91b7174dcef, vol_name:cephfs) < ""
Dec 06 10:20:17 np0005548790.localdomain podman[316211]: 2025-12-06 10:20:17.922496978 +0000 UTC m=+0.177334441 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:17 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:20:17 np0005548790.localdomain kernel: device tapd02e78d4-85 left promiscuous mode
Dec 06 10:20:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:17.948 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:17 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:17Z|00172|binding|INFO|Releasing lport d02e78d4-855e-4b7e-93d0-6b4ebd411b6c from this chassis (sb_readonly=0)
Dec 06 10:20:17 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:17Z|00173|binding|INFO|Setting lport d02e78d4-855e-4b7e-93d0-6b4ebd411b6c down in Southbound
Dec 06 10:20:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:17.958 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-2d4c92d0-1543-425e-a12e-93b686695cb2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d4c92d0-1543-425e-a12e-93b686695cb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=480bd9c6-b4ff-4b92-b254-164d0b300da2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=d02e78d4-855e-4b7e-93d0-6b4ebd411b6c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:17.959 159200 INFO neutron.agent.ovn.metadata.agent [-] Port d02e78d4-855e-4b7e-93d0-6b4ebd411b6c in datapath 2d4c92d0-1543-425e-a12e-93b686695cb2 unbound from our chassis
Dec 06 10:20:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:17.960 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2d4c92d0-1543-425e-a12e-93b686695cb2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:17 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:17.960 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c034a0-aeef-41c4-945f-75489069ef9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:17.972 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:18 np0005548790.localdomain ceph-mon[301742]: pgmap v332: 177 pgs: 177 active+clean; 192 MiB data, 896 MiB used, 41 GiB / 42 GiB avail; 5.0 KiB/s rd, 16 KiB/s wr, 11 op/s
Dec 06 10:20:18 np0005548790.localdomain ceph-mon[301742]: osdmap e167: 6 total, 6 up, 6 in
Dec 06 10:20:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:20:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:20:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:20:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160368 "" "Go-http-client/1.1"
Dec 06 10:20:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:20:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20148 "" "Go-http-client/1.1"
Dec 06 10:20:18 np0005548790.localdomain dnsmasq[316102]: exiting on receipt of SIGTERM
Dec 06 10:20:18 np0005548790.localdomain podman[316257]: 2025-12-06 10:20:18.902261716 +0000 UTC m=+0.059329357 container kill 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:18 np0005548790.localdomain systemd[1]: libpod-54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4.scope: Deactivated successfully.
Dec 06 10:20:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "format": "json"}]: dispatch
Dec 06 10:20:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:18 np0005548790.localdomain podman[316271]: 2025-12-06 10:20:18.986637561 +0000 UTC m=+0.068764963 container died 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:20:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:18 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:18.989+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b' of type subvolume
Dec 06 10:20:18 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b' of type subvolume
Dec 06 10:20:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, vol_name:cephfs) < ""
Dec 06 10:20:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b'' moved to trashcan
Dec 06 10:20:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b, vol_name:cephfs) < ""
Dec 06 10:20:19 np0005548790.localdomain systemd[1]: tmp-crun.FRSDwM.mount: Deactivated successfully.
Dec 06 10:20:19 np0005548790.localdomain podman[316271]: 2025-12-06 10:20:19.0316985 +0000 UTC m=+0.113825862 container cleanup 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:20:19 np0005548790.localdomain systemd[1]: libpod-conmon-54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4.scope: Deactivated successfully.
Dec 06 10:20:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:19 np0005548790.localdomain podman[316272]: 2025-12-06 10:20:19.116512196 +0000 UTC m=+0.193331934 container remove 54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-2d4c92d0-1543-425e-a12e-93b686695cb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:19.143 262327 INFO neutron.agent.dhcp.agent [None req-2fcb0f89-c5fe-4951-a820-04373618b87c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 19 KiB/s wr, 47 op/s
Dec 06 10:20:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e168 e168: 6 total, 6 up, 6 in
Dec 06 10:20:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:19.297 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "format": "json"}]: dispatch
Dec 06 10:20:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d6e1245-952e-4af4-91c8-d91b7174dcef", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:19.352 262327 INFO neutron.agent.linux.ip_lib [None req-7ca97541-feae-4cf6-924a-e826d33b447c - - - - - -] Device tapf7ebd70b-fa cannot be used as it has no MAC address
Dec 06 10:20:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:19.375 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 np0005548790.localdomain kernel: device tapf7ebd70b-fa entered promiscuous mode
Dec 06 10:20:19 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016419.3825] manager: (tapf7ebd70b-fa): new Generic device (/org/freedesktop/NetworkManager/Devices/38)
Dec 06 10:20:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:19.384 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:19Z|00174|binding|INFO|Claiming lport f7ebd70b-fabe-47ad-a777-57d49037899d for this chassis.
Dec 06 10:20:19 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:19Z|00175|binding|INFO|f7ebd70b-fabe-47ad-a777-57d49037899d: Claiming unknown
Dec 06 10:20:19 np0005548790.localdomain systemd-udevd[316309]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:19.398 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-86455fc4-77a8-4614-b523-6dbd6eda7e6b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86455fc4-77a8-4614-b523-6dbd6eda7e6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e956acb2-5296-454f-a064-ce75f6fb6601, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f7ebd70b-fabe-47ad-a777-57d49037899d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:19.400 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f7ebd70b-fabe-47ad-a777-57d49037899d in datapath 86455fc4-77a8-4614-b523-6dbd6eda7e6b bound to our chassis
Dec 06 10:20:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:19.400 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86455fc4-77a8-4614-b523-6dbd6eda7e6b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:19.401 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c45202c6-8f22-4a92-bbe1-a051e3d58039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:19Z|00176|binding|INFO|Setting lport f7ebd70b-fabe-47ad-a777-57d49037899d ovn-installed in OVS
Dec 06 10:20:19 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:19Z|00177|binding|INFO|Setting lport f7ebd70b-fabe-47ad-a777-57d49037899d up in Southbound
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:19.424 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapf7ebd70b-fa: No such device
Dec 06 10:20:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:19.457 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:19.486 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:19.518 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c54e46ed7b711e18fd2bcfd5372d2cf626721dda0b9d8ca45d53eecff2e46161-merged.mount: Deactivated successfully.
Dec 06 10:20:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54ffc7d4258eb33b21966307b2e018ccd9278691dec20c2467460929c97457b4-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:19 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d2d4c92d0\x2d1543\x2d425e\x2da12e\x2d93b686695cb2.mount: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain podman[316378]: 
Dec 06 10:20:20 np0005548790.localdomain podman[316378]: 2025-12-06 10:20:20.300543544 +0000 UTC m=+0.084668253 container create 3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86455fc4-77a8-4614-b523-6dbd6eda7e6b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:20:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "format": "json"}]: dispatch
Dec 06 10:20:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7adc54b5-eca0-4f1a-9df6-2cf3e56ffc2b", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:20 np0005548790.localdomain ceph-mon[301742]: pgmap v334: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 19 KiB/s wr, 47 op/s
Dec 06 10:20:20 np0005548790.localdomain ceph-mon[301742]: osdmap e168: 6 total, 6 up, 6 in
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:20:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e169 e169: 6 total, 6 up, 6 in
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: Started libpod-conmon-3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac.scope.
Dec 06 10:20:20 np0005548790.localdomain podman[316378]: 2025-12-06 10:20:20.259246236 +0000 UTC m=+0.043370985 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:20Z|00178|binding|INFO|Removing iface tapf7ebd70b-fa ovn-installed in OVS
Dec 06 10:20:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:20Z|00179|binding|INFO|Removing lport f7ebd70b-fabe-47ad-a777-57d49037899d ovn-installed in OVS
Dec 06 10:20:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:20.359 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 4a375d17-4e45-44c9-82cf-4b95e63055b3 with type ""
Dec 06 10:20:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:20.361 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-86455fc4-77a8-4614-b523-6dbd6eda7e6b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86455fc4-77a8-4614-b523-6dbd6eda7e6b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e956acb2-5296-454f-a064-ce75f6fb6601, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=f7ebd70b-fabe-47ad-a777-57d49037899d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:20.363 159200 INFO neutron.agent.ovn.metadata.agent [-] Port f7ebd70b-fabe-47ad-a777-57d49037899d in datapath 86455fc4-77a8-4614-b523-6dbd6eda7e6b unbound from our chassis
Dec 06 10:20:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:20.364 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 86455fc4-77a8-4614-b523-6dbd6eda7e6b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:20.365 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4051fd-e40b-4503-a35e-ff31f41f9923]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:20.395 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72844f04254838603302763b9d32e168ee4a3dd668ccbae06dae68a3439bb56b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:20 np0005548790.localdomain podman[316378]: 2025-12-06 10:20:20.408805764 +0000 UTC m=+0.192930553 container init 3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86455fc4-77a8-4614-b523-6dbd6eda7e6b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 10:20:20 np0005548790.localdomain podman[316378]: 2025-12-06 10:20:20.41972471 +0000 UTC m=+0.203849419 container start 3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86455fc4-77a8-4614-b523-6dbd6eda7e6b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 10:20:20 np0005548790.localdomain dnsmasq[316414]: started, version 2.85 cachesize 150
Dec 06 10:20:20 np0005548790.localdomain dnsmasq[316414]: DNS service limited to local subnets
Dec 06 10:20:20 np0005548790.localdomain dnsmasq[316414]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:20 np0005548790.localdomain dnsmasq[316414]: warning: no upstream servers configured
Dec 06 10:20:20 np0005548790.localdomain dnsmasq-dhcp[316414]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:20 np0005548790.localdomain dnsmasq[316414]: read /var/lib/neutron/dhcp/86455fc4-77a8-4614-b523-6dbd6eda7e6b/addn_hosts - 0 addresses
Dec 06 10:20:20 np0005548790.localdomain dnsmasq-dhcp[316414]: read /var/lib/neutron/dhcp/86455fc4-77a8-4614-b523-6dbd6eda7e6b/host
Dec 06 10:20:20 np0005548790.localdomain dnsmasq-dhcp[316414]: read /var/lib/neutron/dhcp/86455fc4-77a8-4614-b523-6dbd6eda7e6b/opts
Dec 06 10:20:20 np0005548790.localdomain podman[316393]: 2025-12-06 10:20:20.465593352 +0000 UTC m=+0.126040443 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true)
Dec 06 10:20:20 np0005548790.localdomain podman[316393]: 2025-12-06 10:20:20.500090095 +0000 UTC m=+0.160537146 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:20:20 np0005548790.localdomain podman[316391]: 2025-12-06 10:20:20.509820108 +0000 UTC m=+0.174175455 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:20.526 262327 INFO neutron.agent.dhcp.agent [None req-7a9e4651-03cd-4fb9-b7a0-e9c77b2ce131 - - - - - -] DHCP configuration for ports {'011e5f55-b01b-4fc6-aafe-7573402679c1'} is completed
Dec 06 10:20:20 np0005548790.localdomain podman[316391]: 2025-12-06 10:20:20.550240493 +0000 UTC m=+0.214595850 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:20.595 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:20 np0005548790.localdomain dnsmasq[316414]: exiting on receipt of SIGTERM
Dec 06 10:20:20 np0005548790.localdomain podman[316460]: 2025-12-06 10:20:20.658260616 +0000 UTC m=+0.043934360 container kill 3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86455fc4-77a8-4614-b523-6dbd6eda7e6b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: libpod-3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac.scope: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain podman[316475]: 2025-12-06 10:20:20.713995324 +0000 UTC m=+0.039790817 container died 3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86455fc4-77a8-4614-b523-6dbd6eda7e6b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:20:20 np0005548790.localdomain podman[316475]: 2025-12-06 10:20:20.749739952 +0000 UTC m=+0.075535375 container remove 3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-86455fc4-77a8-4614-b523-6dbd6eda7e6b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: libpod-conmon-3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac.scope: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:20.759 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:20 np0005548790.localdomain kernel: device tapf7ebd70b-fa left promiscuous mode
Dec 06 10:20:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:20.767 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:20 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:20.800 262327 INFO neutron.agent.dhcp.agent [None req-008fbcb6-a2b7-41a4-a8e4-412526f3a362 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:20 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:20.801 262327 INFO neutron.agent.dhcp.agent [None req-008fbcb6-a2b7-41a4-a8e4-412526f3a362 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-72844f04254838603302763b9d32e168ee4a3dd668ccbae06dae68a3439bb56b-merged.mount: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3df0be59ca94d9149aff1517a732edfd396af414e38eb9297e85abc1517822ac-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:20 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d86455fc4\x2d77a8\x2d4614\x2db523\x2d6dbd6eda7e6b.mount: Deactivated successfully.
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81753d92-4847-43cb-b357-c4adab052a83", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81753d92-4847-43cb-b357-c4adab052a83, vol_name:cephfs) < ""
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 21 KiB/s wr, 51 op/s
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/81753d92-4847-43cb-b357-c4adab052a83/.meta.tmp'
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/81753d92-4847-43cb-b357-c4adab052a83/.meta.tmp' to config b'/volumes/_nogroup/81753d92-4847-43cb-b357-c4adab052a83/.meta'
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:81753d92-4847-43cb-b357-c4adab052a83, vol_name:cephfs) < ""
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81753d92-4847-43cb-b357-c4adab052a83", "format": "json"}]: dispatch
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81753d92-4847-43cb-b357-c4adab052a83, vol_name:cephfs) < ""
Dec 06 10:20:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:81753d92-4847-43cb-b357-c4adab052a83, vol_name:cephfs) < ""
Dec 06 10:20:21 np0005548790.localdomain ceph-mon[301742]: osdmap e169: 6 total, 6 up, 6 in
Dec 06 10:20:21 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e170 e170: 6 total, 6 up, 6 in
Dec 06 10:20:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e171 e171: 6 total, 6 up, 6 in
Dec 06 10:20:22 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "81753d92-4847-43cb-b357-c4adab052a83", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:22 np0005548790.localdomain ceph-mon[301742]: pgmap v337: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 21 KiB/s wr, 51 op/s
Dec 06 10:20:22 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "81753d92-4847-43cb-b357-c4adab052a83", "format": "json"}]: dispatch
Dec 06 10:20:22 np0005548790.localdomain ceph-mon[301742]: osdmap e170: 6 total, 6 up, 6 in
Dec 06 10:20:22 np0005548790.localdomain ceph-mon[301742]: osdmap e171: 6 total, 6 up, 6 in
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, vol_name:cephfs) < ""
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9cf2fd4c-90f2-4604-9f84-44511d67581f/.meta.tmp'
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9cf2fd4c-90f2-4604-9f84-44511d67581f/.meta.tmp' to config b'/volumes/_nogroup/9cf2fd4c-90f2-4604-9f84-44511d67581f/.meta'
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, vol_name:cephfs) < ""
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "format": "json"}]: dispatch
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, vol_name:cephfs) < ""
Dec 06 10:20:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, vol_name:cephfs) < ""
Dec 06 10:20:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:22.910 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 29 KiB/s wr, 113 op/s
Dec 06 10:20:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e172 e172: 6 total, 6 up, 6 in
Dec 06 10:20:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:20:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:20:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:20:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:20:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:20:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:23.710 2 INFO neutron.agent.securitygroups_rpc [None req-048ea5a4-2d0a-4365-a398-a917ab48f027 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['eb426258-160f-4f74-a9d2-50e476134e75']
Dec 06 10:20:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:23.992 2 INFO neutron.agent.securitygroups_rpc [None req-6df8f9bf-dce2-42c5-9279-2397b4b4c0d3 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['eb426258-160f-4f74-a9d2-50e476134e75']
Dec 06 10:20:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "format": "json"}]: dispatch
Dec 06 10:20:24 np0005548790.localdomain ceph-mon[301742]: pgmap v340: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 29 KiB/s wr, 113 op/s
Dec 06 10:20:24 np0005548790.localdomain ceph-mon[301742]: osdmap e172: 6 total, 6 up, 6 in
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 24 KiB/s wr, 93 op/s
Dec 06 10:20:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e173 e173: 6 total, 6 up, 6 in
Dec 06 10:20:25 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:25.649 262327 INFO neutron.agent.linux.ip_lib [None req-d7b0b5e5-6a6c-4f88-8d63-80dac35274fd - - - - - -] Device tap38dccc84-87 cannot be used as it has no MAC address
Dec 06 10:20:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:25.669 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:25 np0005548790.localdomain kernel: device tap38dccc84-87 entered promiscuous mode
Dec 06 10:20:25 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016425.6763] manager: (tap38dccc84-87): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Dec 06 10:20:25 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:25Z|00180|binding|INFO|Claiming lport 38dccc84-8754-456c-9205-f96baa3e1510 for this chassis.
Dec 06 10:20:25 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:25Z|00181|binding|INFO|38dccc84-8754-456c-9205-f96baa3e1510: Claiming unknown
Dec 06 10:20:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:25.677 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:25 np0005548790.localdomain systemd-udevd[316511]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:25 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:25.684 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-bbb07d43-4846-4239-a49d-277cfc45f0fe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbb07d43-4846-4239-a49d-277cfc45f0fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75a335d9-38e8-4774-a6af-55c82ee21924, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=38dccc84-8754-456c-9205-f96baa3e1510) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:25 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:25.688 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 38dccc84-8754-456c-9205-f96baa3e1510 in datapath bbb07d43-4846-4239-a49d-277cfc45f0fe bound to our chassis
Dec 06 10:20:25 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:25.689 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bbb07d43-4846-4239-a49d-277cfc45f0fe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:25 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:25.689 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d45b8415-0544-4e6a-a9b9-d07dad977bda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:25 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:25Z|00182|binding|INFO|Setting lport 38dccc84-8754-456c-9205-f96baa3e1510 ovn-installed in OVS
Dec 06 10:20:25 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:25Z|00183|binding|INFO|Setting lport 38dccc84-8754-456c-9205-f96baa3e1510 up in Southbound
Dec 06 10:20:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:25.716 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:25.752 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:25.778 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81753d92-4847-43cb-b357-c4adab052a83", "format": "json"}]: dispatch
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:81753d92-4847-43cb-b357-c4adab052a83, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:81753d92-4847-43cb-b357-c4adab052a83, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:25.819+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81753d92-4847-43cb-b357-c4adab052a83' of type subvolume
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '81753d92-4847-43cb-b357-c4adab052a83' of type subvolume
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81753d92-4847-43cb-b357-c4adab052a83", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81753d92-4847-43cb-b357-c4adab052a83, vol_name:cephfs) < ""
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/81753d92-4847-43cb-b357-c4adab052a83'' moved to trashcan
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:81753d92-4847-43cb-b357-c4adab052a83, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:26.137 2 INFO neutron.agent.securitygroups_rpc [None req-ef8cf78f-f1a9-46f9-a6c4-622f166b1f57 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta.tmp'
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta.tmp' to config b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta'
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "format": "json"}]: dispatch
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mon[301742]: pgmap v342: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 24 KiB/s wr, 93 op/s
Dec 06 10:20:26 np0005548790.localdomain ceph-mon[301742]: osdmap e173: 6 total, 6 up, 6 in
Dec 06 10:20:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp'
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp' to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta'
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "format": "json"}]: dispatch
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:20:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:26.558 2 INFO neutron.agent.securitygroups_rpc [None req-6a0c3d03-5496-4b9d-aee6-2794cf73d3e3 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:26 np0005548790.localdomain podman[316564]: 
Dec 06 10:20:26 np0005548790.localdomain podman[316564]: 2025-12-06 10:20:26.585033253 +0000 UTC m=+0.084046075 container create bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:20:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:26.623 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 374999fe-ac9f-4ed2-ab21-4e1f9cf91db1 with type ""
Dec 06 10:20:26 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:26Z|00184|binding|INFO|Removing iface tap38dccc84-87 ovn-installed in OVS
Dec 06 10:20:26 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:26Z|00185|binding|INFO|Removing lport 38dccc84-8754-456c-9205-f96baa3e1510 ovn-installed in OVS
Dec 06 10:20:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:26.625 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-bbb07d43-4846-4239-a49d-277cfc45f0fe', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbb07d43-4846-4239-a49d-277cfc45f0fe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa76bcfc789b4e53acf344cd0b1cd7c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75a335d9-38e8-4774-a6af-55c82ee21924, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=38dccc84-8754-456c-9205-f96baa3e1510) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:26.627 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 38dccc84-8754-456c-9205-f96baa3e1510 in datapath bbb07d43-4846-4239-a49d-277cfc45f0fe unbound from our chassis
Dec 06 10:20:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:26.627 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bbb07d43-4846-4239-a49d-277cfc45f0fe or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:26 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:26.628 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3231f0fe-49f7-48f2-ba90-55212e80f010]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:26 np0005548790.localdomain systemd[1]: Started libpod-conmon-bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf.scope.
Dec 06 10:20:26 np0005548790.localdomain podman[316564]: 2025-12-06 10:20:26.540058036 +0000 UTC m=+0.039070888 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:26.666 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:26.667 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:26 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:26 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c477c7b089f6907fb81679bf151dfef9a27707423f32cbd4150ac2d7ceb3ff1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:26 np0005548790.localdomain podman[316564]: 2025-12-06 10:20:26.693224342 +0000 UTC m=+0.192237134 container init bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:26 np0005548790.localdomain podman[316564]: 2025-12-06 10:20:26.701118955 +0000 UTC m=+0.200131747 container start bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:26 np0005548790.localdomain dnsmasq[316582]: started, version 2.85 cachesize 150
Dec 06 10:20:26 np0005548790.localdomain dnsmasq[316582]: DNS service limited to local subnets
Dec 06 10:20:26 np0005548790.localdomain dnsmasq[316582]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:26 np0005548790.localdomain dnsmasq[316582]: warning: no upstream servers configured
Dec 06 10:20:26 np0005548790.localdomain dnsmasq-dhcp[316582]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:20:26 np0005548790.localdomain dnsmasq[316582]: read /var/lib/neutron/dhcp/bbb07d43-4846-4239-a49d-277cfc45f0fe/addn_hosts - 0 addresses
Dec 06 10:20:26 np0005548790.localdomain dnsmasq-dhcp[316582]: read /var/lib/neutron/dhcp/bbb07d43-4846-4239-a49d-277cfc45f0fe/host
Dec 06 10:20:26 np0005548790.localdomain dnsmasq-dhcp[316582]: read /var/lib/neutron/dhcp/bbb07d43-4846-4239-a49d-277cfc45f0fe/opts
Dec 06 10:20:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:26.778 2 INFO neutron.agent.securitygroups_rpc [None req-cb5ad30b-b885-42c3-a286-99bf227690f5 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:26 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:26.813 262327 INFO neutron.agent.dhcp.agent [None req-08598bbe-61fb-4c6c-a23a-52f685499a88 - - - - - -] DHCP configuration for ports {'f594e54d-2e5f-4176-8189-08757276b09c'} is completed
Dec 06 10:20:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:26.935 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:26.973 2 INFO neutron.agent.securitygroups_rpc [None req-415499aa-cb15-4206-8b55-b9a21ed2dc86 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:27 np0005548790.localdomain dnsmasq[316582]: exiting on receipt of SIGTERM
Dec 06 10:20:27 np0005548790.localdomain podman[316600]: 2025-12-06 10:20:27.060688538 +0000 UTC m=+0.068739832 container kill bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 10:20:27 np0005548790.localdomain systemd[1]: libpod-bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf.scope: Deactivated successfully.
Dec 06 10:20:27 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:27.100 2 INFO neutron.agent.securitygroups_rpc [None req-2e89e39c-a441-4c37-8f6e-df561eb77ca2 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:27 np0005548790.localdomain podman[316612]: 2025-12-06 10:20:27.124606148 +0000 UTC m=+0.053700825 container died bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:20:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 77 op/s
Dec 06 10:20:27 np0005548790.localdomain podman[316612]: 2025-12-06 10:20:27.203464202 +0000 UTC m=+0.132558839 container cleanup bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:20:27 np0005548790.localdomain systemd[1]: libpod-conmon-bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf.scope: Deactivated successfully.
Dec 06 10:20:27 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:27.212 2 INFO neutron.agent.securitygroups_rpc [None req-a03daea5-0289-4f98-a7ae-aa6379a3c0f5 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:27 np0005548790.localdomain podman[316619]: 2025-12-06 10:20:27.225105217 +0000 UTC m=+0.141061248 container remove bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bbb07d43-4846-4239-a49d-277cfc45f0fe, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:27.237 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:27 np0005548790.localdomain kernel: device tap38dccc84-87 left promiscuous mode
Dec 06 10:20:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:27.258 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e174 e174: 6 total, 6 up, 6 in
Dec 06 10:20:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:27.295 262327 INFO neutron.agent.dhcp.agent [None req-7b030a7a-0b70-4c6b-8bfe-a36b93b704cc - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:27 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:27.296 262327 INFO neutron.agent.dhcp.agent [None req-7b030a7a-0b70-4c6b-8bfe-a36b93b704cc - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "81753d92-4847-43cb-b357-c4adab052a83", "format": "json"}]: dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "81753d92-4847-43cb-b357-c4adab052a83", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "format": "json"}]: dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "format": "json"}]: dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:27 np0005548790.localdomain ceph-mon[301742]: osdmap e174: 6 total, 6 up, 6 in
Dec 06 10:20:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-7c477c7b089f6907fb81679bf151dfef9a27707423f32cbd4150ac2d7ceb3ff1-merged.mount: Deactivated successfully.
Dec 06 10:20:27 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bddd4dcda7fc8be9cfb5336988ad031ef5031e501ec1b77ef99a86f3c162c2cf-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:27 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2dbbb07d43\x2d4846\x2d4239\x2da49d\x2d277cfc45f0fe.mount: Deactivated successfully.
Dec 06 10:20:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:27.959 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:28 np0005548790.localdomain ceph-mon[301742]: pgmap v344: 177 pgs: 177 active+clean; 192 MiB data, 901 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 20 KiB/s wr, 77 op/s
Dec 06 10:20:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:29.041 2 INFO neutron.agent.securitygroups_rpc [None req-3c4f32bd-b684-4b66-9a34-68450fbeb73b 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c706f26e-f87f-4af4-a717-78c8b08cc789, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 42 op/s
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c706f26e-f87f-4af4-a717-78c8b08cc789/.meta.tmp'
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c706f26e-f87f-4af4-a717-78c8b08cc789/.meta.tmp' to config b'/volumes/_nogroup/c706f26e-f87f-4af4-a717-78c8b08cc789/.meta'
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c706f26e-f87f-4af4-a717-78c8b08cc789, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "format": "json"}]: dispatch
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c706f26e-f87f-4af4-a717-78c8b08cc789, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c706f26e-f87f-4af4-a717-78c8b08cc789, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:29.256 2 INFO neutron.agent.securitygroups_rpc [None req-03f78b6a-c5ab-4048-95e5-dac6933624ce 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e175 e175: 6 total, 6 up, 6 in
Dec 06 10:20:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:29.503 2 INFO neutron.agent.securitygroups_rpc [None req-bec77d6f-35e0-4121-9e3c-321d796fa6a3 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:29 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:29.698 2 INFO neutron.agent.securitygroups_rpc [None req-b83ddb52-2122-4d26-8d71-acc6737aed87 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['bd391031-3a56-454d-bf4f-b44b517a0aeb']
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "snap_name": "bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b", "format": "json"}]: dispatch
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb", "format": "json"}]: dispatch
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:20:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:20:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:30 np0005548790.localdomain ceph-mon[301742]: pgmap v346: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 42 op/s
Dec 06 10:20:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "format": "json"}]: dispatch
Dec 06 10:20:30 np0005548790.localdomain ceph-mon[301742]: osdmap e175: 6 total, 6 up, 6 in
Dec 06 10:20:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "snap_name": "bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b", "format": "json"}]: dispatch
Dec 06 10:20:30 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:30.742 2 INFO neutron.agent.securitygroups_rpc [None req-1a7eb84d-a3b4-4a88-a0ae-062b0b90ebc4 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['ea4ca242-5187-4603-82cf-af66665b0039']
Dec 06 10:20:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 42 op/s
Dec 06 10:20:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb", "format": "json"}]: dispatch
Dec 06 10:20:31 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3989965424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:31 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3989965424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:32 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e176 e176: 6 total, 6 up, 6 in
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "format": "json"}]: dispatch
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c706f26e-f87f-4af4-a717-78c8b08cc789, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c706f26e-f87f-4af4-a717-78c8b08cc789, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:32 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:32.455+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c706f26e-f87f-4af4-a717-78c8b08cc789' of type subvolume
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c706f26e-f87f-4af4-a717-78c8b08cc789' of type subvolume
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c706f26e-f87f-4af4-a717-78c8b08cc789, vol_name:cephfs) < ""
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c706f26e-f87f-4af4-a717-78c8b08cc789'' moved to trashcan
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c706f26e-f87f-4af4-a717-78c8b08cc789, vol_name:cephfs) < ""
Dec 06 10:20:32 np0005548790.localdomain ceph-mon[301742]: pgmap v348: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 19 KiB/s wr, 42 op/s
Dec 06 10:20:32 np0005548790.localdomain ceph-mon[301742]: osdmap e176: 6 total, 6 up, 6 in
Dec 06 10:20:32 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:32.641 2 INFO neutron.agent.securitygroups_rpc [None req-8df6a51f-2782-49f1-a34d-739b1e2f53d1 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['0223fd9f-7d67-4f35-8221-a118caed647f']
Dec 06 10:20:32 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:32.951 2 INFO neutron.agent.securitygroups_rpc [None req-c356675a-b0a4-4bc7-b431-054879bdecb2 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['0223fd9f-7d67-4f35-8221-a118caed647f']
Dec 06 10:20:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:32.961 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:20:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:32.963 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:20:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:32.963 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:20:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:32.963 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.000 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.001 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:20:33 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:33.079 262327 INFO neutron.agent.linux.ip_lib [None req-3ac4f928-87a4-4faa-80bc-481dfc40dd28 - - - - - -] Device tapd744111f-47 cannot be used as it has no MAC address
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.104 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain kernel: device tapd744111f-47 entered promiscuous mode
Dec 06 10:20:33 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016433.1123] manager: (tapd744111f-47): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.113 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:33Z|00186|binding|INFO|Claiming lport d744111f-47d1-49d1-89dd-f13551e20dec for this chassis.
Dec 06 10:20:33 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:33Z|00187|binding|INFO|d744111f-47d1-49d1-89dd-f13551e20dec: Claiming unknown
Dec 06 10:20:33 np0005548790.localdomain systemd-udevd[316651]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:33.127 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-dce9cf93-75e3-453b-bd42-cd705aed0f53', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dce9cf93-75e3-453b-bd42-cd705aed0f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b8fe5d2-8761-4006-a8c5-55feeb23e9b6, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=d744111f-47d1-49d1-89dd-f13551e20dec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:33.129 159200 INFO neutron.agent.ovn.metadata.agent [-] Port d744111f-47d1-49d1-89dd-f13551e20dec in datapath dce9cf93-75e3-453b-bd42-cd705aed0f53 bound to our chassis
Dec 06 10:20:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:33.130 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dce9cf93-75e3-453b-bd42-cd705aed0f53 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:33 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:33.131 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[eae9f5c7-936d-4a5a-bd98-ef2f98a98a78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.168 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd744111f-47: No such device
Dec 06 10:20:33 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:33Z|00188|binding|INFO|Setting lport d744111f-47d1-49d1-89dd-f13551e20dec ovn-installed in OVS
Dec 06 10:20:33 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:33Z|00189|binding|INFO|Setting lport d744111f-47d1-49d1-89dd-f13551e20dec up in Southbound
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.173 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.179 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb", "target_sub_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, target_sub_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, vol_name:cephfs) < ""
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 37 KiB/s wr, 91 op/s
Dec 06 10:20:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:33.217 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp' to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 49ec765e-8e2c-449c-9bb6-dae32b7575ea for path b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp' to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, target_sub_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, vol_name:cephfs) < ""
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.276+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.276+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.276+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.276+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.276+0000 7f0638df5640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 6e6fd32a-770e-4afe-8d83-a1956fc630a7)
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.302+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.302+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.302+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.302+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:33.302+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 6e6fd32a-770e-4afe-8d83-a1956fc630a7) -- by 0 seconds
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp'
Dec 06 10:20:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp' to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta'
Dec 06 10:20:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "format": "json"}]: dispatch
Dec 06 10:20:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c706f26e-f87f-4af4-a717-78c8b08cc789", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:34 np0005548790.localdomain podman[316746]: 
Dec 06 10:20:34 np0005548790.localdomain podman[316746]: 2025-12-06 10:20:34.034201706 +0000 UTC m=+0.088528008 container create 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:20:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:34 np0005548790.localdomain systemd[1]: Started libpod-conmon-680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb.scope.
Dec 06 10:20:34 np0005548790.localdomain systemd[1]: tmp-crun.AXOU5l.mount: Deactivated successfully.
Dec 06 10:20:34 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:34 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c86963464ebe72afcb63a58d814e3a53812c0ac46d30e2bf67f47dc57d9bc4d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:34 np0005548790.localdomain podman[316746]: 2025-12-06 10:20:34.095767112 +0000 UTC m=+0.150093374 container init 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 06 10:20:34 np0005548790.localdomain podman[316746]: 2025-12-06 10:20:33.999850366 +0000 UTC m=+0.054176668 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:34 np0005548790.localdomain podman[316746]: 2025-12-06 10:20:34.105193827 +0000 UTC m=+0.159520089 container start 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: started, version 2.85 cachesize 150
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: DNS service limited to local subnets
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: warning: no upstream servers configured
Dec 06 10:20:34 np0005548790.localdomain dnsmasq-dhcp[316764]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: read /var/lib/neutron/dhcp/dce9cf93-75e3-453b-bd42-cd705aed0f53/addn_hosts - 0 addresses
Dec 06 10:20:34 np0005548790.localdomain dnsmasq-dhcp[316764]: read /var/lib/neutron/dhcp/dce9cf93-75e3-453b-bd42-cd705aed0f53/host
Dec 06 10:20:34 np0005548790.localdomain dnsmasq-dhcp[316764]: read /var/lib/neutron/dhcp/dce9cf93-75e3-453b-bd42-cd705aed0f53/opts
Dec 06 10:20:34 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:34.300 262327 INFO neutron.agent.dhcp.agent [None req-4c582ad2-6c0d-4292-b37b-5a4df66fbc11 - - - - - -] DHCP configuration for ports {'9a5d4d53-6454-474c-b66b-4d40a40a1b2b'} is completed
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: read /var/lib/neutron/dhcp/dce9cf93-75e3-453b-bd42-cd705aed0f53/addn_hosts - 0 addresses
Dec 06 10:20:34 np0005548790.localdomain dnsmasq-dhcp[316764]: read /var/lib/neutron/dhcp/dce9cf93-75e3-453b-bd42-cd705aed0f53/host
Dec 06 10:20:34 np0005548790.localdomain podman[316780]: 2025-12-06 10:20:34.413543983 +0000 UTC m=+0.044204547 container kill 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:34 np0005548790.localdomain dnsmasq-dhcp[316764]: read /var/lib/neutron/dhcp/dce9cf93-75e3-453b-bd42-cd705aed0f53/opts
Dec 06 10:20:34 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb", "target_sub_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:20:34 np0005548790.localdomain ceph-mon[301742]: pgmap v350: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 37 KiB/s wr, 91 op/s
Dec 06 10:20:34 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:20:34 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:34.665 262327 INFO neutron.agent.dhcp.agent [None req-4586cf7f-e917-4597-83ee-1cf0dc936d15 - - - - - -] DHCP configuration for ports {'d744111f-47d1-49d1-89dd-f13551e20dec', '9a5d4d53-6454-474c-b66b-4d40a40a1b2b'} is completed
Dec 06 10:20:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:20:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:34 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:34Z|00190|binding|INFO|Removing iface tapd744111f-47 ovn-installed in OVS
Dec 06 10:20:34 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:34.709 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 8cd6a6f0-f22c-43ed-bf45-841932d96ac2 with type ""
Dec 06 10:20:34 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:34Z|00191|binding|INFO|Removing lport d744111f-47d1-49d1-89dd-f13551e20dec ovn-installed in OVS
Dec 06 10:20:34 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:34.712 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-dce9cf93-75e3-453b-bd42-cd705aed0f53', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dce9cf93-75e3-453b-bd42-cd705aed0f53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b8fe5d2-8761-4006-a8c5-55feeb23e9b6, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=d744111f-47d1-49d1-89dd-f13551e20dec) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:34.711 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:34 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:34.715 159200 INFO neutron.agent.ovn.metadata.agent [-] Port d744111f-47d1-49d1-89dd-f13551e20dec in datapath dce9cf93-75e3-453b-bd42-cd705aed0f53 unbound from our chassis
Dec 06 10:20:34 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:34.718 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dce9cf93-75e3-453b-bd42-cd705aed0f53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:20:34 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:34.719 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[507c8be6-af3a-485b-95be-3a77b2f8208d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:34.722 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:34 np0005548790.localdomain dnsmasq[316764]: exiting on receipt of SIGTERM
Dec 06 10:20:34 np0005548790.localdomain podman[316818]: 2025-12-06 10:20:34.814596849 +0000 UTC m=+0.058546116 container kill 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:34 np0005548790.localdomain systemd[1]: libpod-680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb.scope: Deactivated successfully.
Dec 06 10:20:34 np0005548790.localdomain podman[316830]: 2025-12-06 10:20:34.871247502 +0000 UTC m=+0.044803554 container died 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 06 10:20:34 np0005548790.localdomain podman[316830]: 2025-12-06 10:20:34.959984164 +0000 UTC m=+0.133540176 container cleanup 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:20:34 np0005548790.localdomain systemd[1]: libpod-conmon-680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb.scope: Deactivated successfully.
Dec 06 10:20:34 np0005548790.localdomain podman[316832]: 2025-12-06 10:20:34.979609746 +0000 UTC m=+0.143182008 container remove 680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dce9cf93-75e3-453b-bd42-cd705aed0f53, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:20:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:34.991 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:34 np0005548790.localdomain kernel: device tapd744111f-47 left promiscuous mode
Dec 06 10:20:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:35.005 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-5c86963464ebe72afcb63a58d814e3a53812c0ac46d30e2bf67f47dc57d9bc4d-merged.mount: Deactivated successfully.
Dec 06 10:20:35 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-680daae037818eb7201a69cdb6ee7ab22bd17ef9a0b2891422231e78ef8323bb-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 28 KiB/s wr, 69 op/s
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.362 262327 INFO neutron.agent.dhcp.agent [None req-ec2e0b7b-ca98-4d5c-b043-2d0825631077 - - - - - -] Synchronizing state
Dec 06 10:20:35 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2ddce9cf93\x2d75e3\x2d453b\x2dbd42\x2dcd705aed0f53.mount: Deactivated successfully.
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:35.501 2 INFO neutron.agent.securitygroups_rpc [None req-70f019b0-4c49-406a-b078-506915b4f443 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['2bea4444-1a2f-4249-8686-d0a5b03f529f']
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.snap/802dfa0a-e026-43d4-b3ba-67c6242d0ddb/d1fa3e3a-6af5-45e3-acdb-573eb99ee54f' to b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/309f56d1-d97f-49d8-9053-6bedbe1fff53'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp' to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.clone_index] untracking 49ec765e-8e2c-449c-9bb6-dae32b7575ea
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.579 262327 INFO neutron.agent.dhcp.agent [None req-5c2ec399-4229-400a-8417-66519646f114 - - - - - -] All active networks have been fetched through RPC.
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.580 262327 INFO neutron.agent.dhcp.agent [-] Starting network dce9cf93-75e3-453b-bd42-cd705aed0f53 dhcp configuration
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp' to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta'
Dec 06 10:20:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta.tmp' to config b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7/.meta'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 6e6fd32a-770e-4afe-8d83-a1956fc630a7)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent [None req-f4fb29f2-5170-4f59-8626-9a43ad8026db - - - - - -] Unable to enable dhcp for dce9cf93-75e3-453b-bd42-cd705aed0f53.: oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 5741a793-00c3-4daf-b3d6-c6f2e6003ed6: This subnet is being modified by another concurrent operation.
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/quota/resource_registry.py", line 95, in wrapper\n    ret_val = f(_self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 292, in create_dhcp_port\n    return self._port_action(plugin, context, port, \'create_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 118, in _port_action\n    return p_utils.create_port(plugin, context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/plugins/utils.py", line 338, in create_port\n    return core_plugin.create_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n  
  return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1582, in create_port\n    result, mech_context = self._create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1547, in _create_port_db\n    port_db = self.create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1501, in create_port_db\n    self.ipam.allocate_ips_for_port_and_store(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 219, in allocate_ips_for_port_and_store\n    ips = self.allocate_ips_for_port(context, port_copy)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1044, in wrapper\n    return fn(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 226, in allocate_ips_for_port\n    return self._allocate_ips_for_port(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 258, in _allocate_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    subnet.read_lock_register(\n', '  File 
"/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 5741a793-00c3-4daf-b3d6-c6f2e6003ed6: This subnet is being modified by another concurrent operation.\n'].
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     common_utils.wait_until_true(self._enable, timeout=300)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     while not predicate():
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     self.cleanup_stale_devices(network, dhcp_port=None)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     self.force_reraise()
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     raise self.value
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     port = self.setup_dhcp_port(network, segment)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     dhcp_port = setup_method(network, device_id, dhcp_subnets)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1703, in _setup_new_dhcp_port
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     return self.plugin.create_dhcp_port({'port': port_dict})
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 893, in create_dhcp_port
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     port = cctxt.call(self.context, 'create_dhcp_port',
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     return self._original_context.call(ctxt, method, **kwargs)
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     result = self.transport._send(
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     return self._driver.send(target, ctxt, message,
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent     raise result
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet 5741a793-00c3-4daf-b3d6-c6f2e6003ed6: This subnet is being modified by another concurrent operation.
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/quota/resource_registry.py", line 95, in wrapper\n    ret_val = f(_self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 292, in create_dhcp_port\n    return self._port_action(plugin, context, port, \'create_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 118, in _port_action\n    return p_utils.create_port(plugin, context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/plugins/utils.py", line 338, in create_port\n    return core_plugin.create_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1582, in create_port\n    result, mech_context = self._create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1547, in _create_port_db\n    port_db = self.create_port_db(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1501, in create_port_db\n    self.ipam.allocate_ips_for_port_and_store(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 219, in allocate_ips_for_port_and_store\n    ips = self.allocate_ips_for_port(context, port_copy)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1044, in wrapper\n    return fn(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 226, in allocate_ips_for_port\n    return self._allocate_ips_for_port(context, port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 258, in _allocate_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    
subnet.read_lock_register(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet 5741a793-00c3-4daf-b3d6-c6f2e6003ed6: This subnet is being modified by another concurrent operation.\n'].
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.662 262327 ERROR neutron.agent.dhcp.agent 
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.665 262327 INFO neutron.agent.dhcp.agent [None req-f4fb29f2-5170-4f59-8626-9a43ad8026db - - - - - -] Finished network dce9cf93-75e3-453b-bd42-cd705aed0f53 dhcp configuration
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.665 262327 INFO neutron.agent.dhcp.agent [None req-5c2ec399-4229-400a-8417-66519646f114 - - - - - -] Synchronizing state complete
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.666 262327 INFO neutron.agent.dhcp.agent [None req-5c2ec399-4229-400a-8417-66519646f114 - - - - - -] Synchronizing state
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.697 262327 INFO neutron.agent.dhcp.agent [None req-1626489d-39ca-4519-a5b0-ff379ba3ec0e - - - - - -] DHCP configuration for ports {'9a5d4d53-6454-474c-b66b-4d40a40a1b2b'} is completed
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e60ef5e5-d14b-4505-930a-87b6f2872763, vol_name:cephfs) < ""
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e60ef5e5-d14b-4505-930a-87b6f2872763/.meta.tmp'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e60ef5e5-d14b-4505-930a-87b6f2872763/.meta.tmp' to config b'/volumes/_nogroup/e60ef5e5-d14b-4505-930a-87b6f2872763/.meta'
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e60ef5e5-d14b-4505-930a-87b6f2872763, vol_name:cephfs) < ""
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "format": "json"}]: dispatch
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e60ef5e5-d14b-4505-930a-87b6f2872763, vol_name:cephfs) < ""
Dec 06 10:20:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e60ef5e5-d14b-4505-930a-87b6f2872763, vol_name:cephfs) < ""
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.819 262327 INFO neutron.agent.dhcp.agent [None req-d1ef8786-5bfe-4930-9742-caca6b682477 - - - - - -] All active networks have been fetched through RPC.
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.820 262327 INFO neutron.agent.dhcp.agent [-] Starting network dce9cf93-75e3-453b-bd42-cd705aed0f53 dhcp configuration
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.821 262327 INFO neutron.agent.dhcp.agent [-] Finished network dce9cf93-75e3-453b-bd42-cd705aed0f53 dhcp configuration
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.821 262327 INFO neutron.agent.dhcp.agent [None req-d1ef8786-5bfe-4930-9742-caca6b682477 - - - - - -] Synchronizing state complete
Dec 06 10:20:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:35.895 2 INFO neutron.agent.securitygroups_rpc [None req-76169eb0-4558-4a54-88c6-853dfb7935a8 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['2bea4444-1a2f-4249-8686-d0a5b03f529f']
Dec 06 10:20:35 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:35.909 262327 INFO neutron.agent.dhcp.agent [None req-08e5210f-769e-46d8-8af7-5c83c0a175d7 - - - - - -] DHCP configuration for ports {'9a5d4d53-6454-474c-b66b-4d40a40a1b2b'} is completed
Dec 06 10:20:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:36.032 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "snap_name": "bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b_19292787-d6d4-497b-bcb5-105ffd3d6c15", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b_19292787-d6d4-497b-bcb5-105ffd3d6c15, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:36 np0005548790.localdomain ceph-mon[301742]: pgmap v351: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 28 KiB/s wr, 69 op/s
Dec 06 10:20:36 np0005548790.localdomain ceph-mon[301742]: mgrmap e49: np0005548790.kvkfyr(active, since 8m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:20:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:36 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3962963758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta.tmp'
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta.tmp' to config b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta'
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b_19292787-d6d4-497b-bcb5-105ffd3d6c15, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "snap_name": "bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta.tmp'
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta.tmp' to config b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8/.meta'
Dec 06 10:20:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 13 KiB/s wr, 36 op/s
Dec 06 10:20:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e177 e177: 6 total, 6 up, 6 in
Dec 06 10:20:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:37.335 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:37.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:20:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:37.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:20:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:37.365 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:20:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "format": "json"}]: dispatch
Dec 06 10:20:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "snap_name": "bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b_19292787-d6d4-497b-bcb5-105ffd3d6c15", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "snap_name": "bc9ed0e0-cf16-4288-b389-6bbd7bbfed8b", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1176083142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:37 np0005548790.localdomain ceph-mon[301742]: osdmap e177: 6 total, 6 up, 6 in
Dec 06 10:20:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:38.036 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:38 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:38.295 2 INFO neutron.agent.securitygroups_rpc [None req-10166391-fcb0-4201-a7d2-7443ab5c9b01 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']
Dec 06 10:20:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:20:38 np0005548790.localdomain systemd[1]: tmp-crun.guWFxE.mount: Deactivated successfully.
Dec 06 10:20:38 np0005548790.localdomain podman[316858]: 2025-12-06 10:20:38.560013004 +0000 UTC m=+0.072842302 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:20:38 np0005548790.localdomain podman[316858]: 2025-12-06 10:20:38.593113299 +0000 UTC m=+0.105942577 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:20:38 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:20:38 np0005548790.localdomain ceph-mon[301742]: pgmap v352: 177 pgs: 177 active+clean; 192 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 13 KiB/s wr, 36 op/s
Dec 06 10:20:38 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:38.765 2 INFO neutron.agent.securitygroups_rpc [None req-867b4687-c36d-47aa-8d2e-c76597d4a6cb 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']
Dec 06 10:20:38 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:38.971 2 INFO neutron.agent.securitygroups_rpc [None req-80b8119e-1e57-4c4d-b95c-97abe74340b6 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2101341474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2101341474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 46 KiB/s wr, 46 op/s
Dec 06 10:20:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:39.299 2 INFO neutron.agent.securitygroups_rpc [None req-335b324d-f81b-4cc4-b913-ab18408e9420 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']
Dec 06 10:20:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:39.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:39.650 2 INFO neutron.agent.securitygroups_rpc [None req-c2f184e7-70de-480a-95d7-35dc53af97f7 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2229978036' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2229978036' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2101341474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2101341474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "format": "json"}]: dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ffb9c87f-9478-478c-bceb-83016344d5b8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ffb9c87f-9478-478c-bceb-83016344d5b8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:39 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:39.901+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ffb9c87f-9478-478c-bceb-83016344d5b8' of type subvolume
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ffb9c87f-9478-478c-bceb-83016344d5b8' of type subvolume
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ffb9c87f-9478-478c-bceb-83016344d5b8'' moved to trashcan
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ffb9c87f-9478-478c-bceb-83016344d5b8, vol_name:cephfs) < ""
Dec 06 10:20:40 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:40.454 2 INFO neutron.agent.securitygroups_rpc [None req-4752e8b4-a74a-419a-afa7-aac12ad63453 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['344ee16e-9c56-4a03-94c4-4549205a4025']
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "format": "json"}]: dispatch
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e60ef5e5-d14b-4505-930a-87b6f2872763, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e60ef5e5-d14b-4505-930a-87b6f2872763, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:40 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:40.555+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e60ef5e5-d14b-4505-930a-87b6f2872763' of type subvolume
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e60ef5e5-d14b-4505-930a-87b6f2872763' of type subvolume
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e60ef5e5-d14b-4505-930a-87b6f2872763, vol_name:cephfs) < ""
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e60ef5e5-d14b-4505-930a-87b6f2872763'' moved to trashcan
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e60ef5e5-d14b-4505-930a-87b6f2872763, vol_name:cephfs) < ""
Dec 06 10:20:40 np0005548790.localdomain ceph-mon[301742]: pgmap v354: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 46 KiB/s wr, 46 op/s
Dec 06 10:20:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e178 e178: 6 total, 6 up, 6 in
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 32 KiB/s wr, 9 op/s
Dec 06 10:20:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:41.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:41 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:41.454 262327 INFO neutron.agent.linux.ip_lib [None req-9558100e-24c0-4ea9-a5ef-c43d152bd7dd - - - - - -] Device tapd511f2db-2e cannot be used as it has no MAC address
Dec 06 10:20:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:41.479 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:41 np0005548790.localdomain kernel: device tapd511f2db-2e entered promiscuous mode
Dec 06 10:20:41 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016441.4871] manager: (tapd511f2db-2e): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Dec 06 10:20:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:41.488 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:41 np0005548790.localdomain systemd-udevd[316886]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:41Z|00192|binding|INFO|Claiming lport d511f2db-2eda-419d-8a75-30958d043a26 for this chassis.
Dec 06 10:20:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:41Z|00193|binding|INFO|d511f2db-2eda-419d-8a75-30958d043a26: Claiming unknown
Dec 06 10:20:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:41.502 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-e89dbb24-6358-4511-9ab9-ea980f0ccf77', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e89dbb24-6358-4511-9ab9-ea980f0ccf77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75034a42-da53-4f06-84ac-1587af3093a1, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=d511f2db-2eda-419d-8a75-30958d043a26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:41.507 159200 INFO neutron.agent.ovn.metadata.agent [-] Port d511f2db-2eda-419d-8a75-30958d043a26 in datapath e89dbb24-6358-4511-9ab9-ea980f0ccf77 bound to our chassis
Dec 06 10:20:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:41.508 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e89dbb24-6358-4511-9ab9-ea980f0ccf77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:41.509 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce18b91-721b-4493-8074-0551a0f4bf87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:41Z|00194|binding|INFO|Setting lport d511f2db-2eda-419d-8a75-30958d043a26 ovn-installed in OVS
Dec 06 10:20:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:41Z|00195|binding|INFO|Setting lport d511f2db-2eda-419d-8a75-30958d043a26 up in Southbound
Dec 06 10:20:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:41.521 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapd511f2db-2e: No such device
Dec 06 10:20:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:41.564 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:41.637 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "format": "json"}]: dispatch
Dec 06 10:20:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ffb9c87f-9478-478c-bceb-83016344d5b8", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "format": "json"}]: dispatch
Dec 06 10:20:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e60ef5e5-d14b-4505-930a-87b6f2872763", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:41 np0005548790.localdomain ceph-mon[301742]: osdmap e178: 6 total, 6 up, 6 in
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:20:41
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['.mgr', 'images', 'vms', 'volumes', 'manila_metadata', 'backups', 'manila_data']
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:20:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:20:42 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:20:42.095 2 INFO neutron.agent.securitygroups_rpc [None req-739af509-ab08-45e2-ba83-57dd6efc5660 95268a68b5c84162ba789100555874fb 7787060a7af94f168805e73d06841337 - - default default] Security group rule updated ['6ae4fdb3-8bab-4aac-9ae7-1f521287092b']
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014858362995533222 of space, bias 1.0, pg target 0.29667198114414667 quantized to 32 (current 32)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.9989356504745952e-06 of space, bias 1.0, pg target 0.0005967881944444444 quantized to 32 (current 32)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 6.761236739251814e-05 of space, bias 4.0, pg target 0.05381944444444444 quantized to 16 (current 16)
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:20:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:20:42 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:42Z|00196|binding|INFO|Removing iface tapd511f2db-2e ovn-installed in OVS
Dec 06 10:20:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:42.301 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a58f2cf9-9cd8-42df-9dab-4cd5bd0f2e17 with type ""
Dec 06 10:20:42 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:42Z|00197|binding|INFO|Removing lport d511f2db-2eda-419d-8a75-30958d043a26 ovn-installed in OVS
Dec 06 10:20:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:42.302 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-e89dbb24-6358-4511-9ab9-ea980f0ccf77', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e89dbb24-6358-4511-9ab9-ea980f0ccf77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75034a42-da53-4f06-84ac-1587af3093a1, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=d511f2db-2eda-419d-8a75-30958d043a26) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:42.304 159200 INFO neutron.agent.ovn.metadata.agent [-] Port d511f2db-2eda-419d-8a75-30958d043a26 in datapath e89dbb24-6358-4511-9ab9-ea980f0ccf77 unbound from our chassis
Dec 06 10:20:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:42.305 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e89dbb24-6358-4511-9ab9-ea980f0ccf77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:42.306 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e6732ad3-a267-4dd2-a7f1-b170aed02d09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:42.311 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:42.313 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:42.327 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:42 np0005548790.localdomain podman[316957]: 
Dec 06 10:20:42 np0005548790.localdomain podman[316957]: 2025-12-06 10:20:42.425981492 +0000 UTC m=+0.085031893 container create b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e89dbb24-6358-4511-9ab9-ea980f0ccf77, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: Started libpod-conmon-b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865.scope.
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: tmp-crun.izgFJV.mount: Deactivated successfully.
Dec 06 10:20:42 np0005548790.localdomain podman[316957]: 2025-12-06 10:20:42.384989203 +0000 UTC m=+0.044039684 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:42.495 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:42 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99f0e9fd5bcf1c27d8640475614ec823ce5d84b7166c403da2c7e4d1381f578/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:42 np0005548790.localdomain podman[316957]: 2025-12-06 10:20:42.513203023 +0000 UTC m=+0.172253474 container init b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e89dbb24-6358-4511-9ab9-ea980f0ccf77, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:42 np0005548790.localdomain podman[316957]: 2025-12-06 10:20:42.529911245 +0000 UTC m=+0.188961666 container start b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e89dbb24-6358-4511-9ab9-ea980f0ccf77, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:20:42 np0005548790.localdomain dnsmasq[316975]: started, version 2.85 cachesize 150
Dec 06 10:20:42 np0005548790.localdomain dnsmasq[316975]: DNS service limited to local subnets
Dec 06 10:20:42 np0005548790.localdomain dnsmasq[316975]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:42 np0005548790.localdomain dnsmasq[316975]: warning: no upstream servers configured
Dec 06 10:20:42 np0005548790.localdomain dnsmasq-dhcp[316975]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:20:42 np0005548790.localdomain dnsmasq[316975]: read /var/lib/neutron/dhcp/e89dbb24-6358-4511-9ab9-ea980f0ccf77/addn_hosts - 0 addresses
Dec 06 10:20:42 np0005548790.localdomain dnsmasq-dhcp[316975]: read /var/lib/neutron/dhcp/e89dbb24-6358-4511-9ab9-ea980f0ccf77/host
Dec 06 10:20:42 np0005548790.localdomain dnsmasq-dhcp[316975]: read /var/lib/neutron/dhcp/e89dbb24-6358-4511-9ab9-ea980f0ccf77/opts
Dec 06 10:20:42 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:42.673 262327 INFO neutron.agent.dhcp.agent [None req-ba94dc30-6fc9-443d-bad9-cf3f0fafe997 - - - - - -] DHCP configuration for ports {'b38061dd-99da-4fce-a940-48d094a5dc6b'} is completed
Dec 06 10:20:42 np0005548790.localdomain ceph-mon[301742]: pgmap v356: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 32 KiB/s wr, 9 op/s
Dec 06 10:20:42 np0005548790.localdomain dnsmasq[316975]: exiting on receipt of SIGTERM
Dec 06 10:20:42 np0005548790.localdomain podman[316993]: 2025-12-06 10:20:42.758759569 +0000 UTC m=+0.045937384 container kill b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e89dbb24-6358-4511-9ab9-ea980f0ccf77, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: libpod-b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865.scope: Deactivated successfully.
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:20:42 np0005548790.localdomain podman[317016]: 2025-12-06 10:20:42.85818085 +0000 UTC m=+0.069050640 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, version=9.6)
Dec 06 10:20:42 np0005548790.localdomain podman[317016]: 2025-12-06 10:20:42.877210816 +0000 UTC m=+0.088080636 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:20:42 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:20:42 np0005548790.localdomain podman[317009]: 2025-12-06 10:20:42.903167198 +0000 UTC m=+0.119832825 container died b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e89dbb24-6358-4511-9ab9-ea980f0ccf77, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:20:42 np0005548790.localdomain podman[317009]: 2025-12-06 10:20:42.944491067 +0000 UTC m=+0.161156674 container remove b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e89dbb24-6358-4511-9ab9-ea980f0ccf77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:20:43 np0005548790.localdomain kernel: device tapd511f2db-2e left promiscuous mode
Dec 06 10:20:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:43.001 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:43.010 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:43 np0005548790.localdomain podman[317062]: 2025-12-06 10:20:43.011795848 +0000 UTC m=+0.115621921 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: libpod-conmon-b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865.scope: Deactivated successfully.
Dec 06 10:20:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:43.038 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:43 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:43.038 262327 INFO neutron.agent.dhcp.agent [None req-8d402ca3-b3d4-43c7-8a2d-d627379e126b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:43 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:43.039 262327 INFO neutron.agent.dhcp.agent [None req-8d402ca3-b3d4-43c7-8a2d-d627379e126b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:43 np0005548790.localdomain podman[317062]: 2025-12-06 10:20:43.048286446 +0000 UTC m=+0.152112499 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:20:43 np0005548790.localdomain podman[317015]: 2025-12-06 10:20:43.014985484 +0000 UTC m=+0.222151073 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Dec 06 10:20:43 np0005548790.localdomain podman[317015]: 2025-12-06 10:20:43.097281232 +0000 UTC m=+0.304446851 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 45 KiB/s wr, 33 op/s
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "format": "json"}]: dispatch
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:43 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:43.215+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cf2fd4c-90f2-4604-9f84-44511d67581f' of type subvolume
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9cf2fd4c-90f2-4604-9f84-44511d67581f' of type subvolume
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, vol_name:cephfs) < ""
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9cf2fd4c-90f2-4604-9f84-44511d67581f'' moved to trashcan
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9cf2fd4c-90f2-4604-9f84-44511d67581f, vol_name:cephfs) < ""
Dec 06 10:20:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:43.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:43.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:43.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: tmp-crun.27oapj.mount: Deactivated successfully.
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-e99f0e9fd5bcf1c27d8640475614ec823ce5d84b7166c403da2c7e4d1381f578-merged.mount: Deactivated successfully.
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4dbcc9a522eee2ac333fb9754711660339f5ddfa02c212e5130c3b413d9a865-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:43 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2de89dbb24\x2d6358\x2d4511\x2d9ab9\x2dea980f0ccf77.mount: Deactivated successfully.
Dec 06 10:20:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, vol_name:cephfs) < ""
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/046a7cd8-3e98-42c3-b5fe-95f442606a2a/.meta.tmp'
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/046a7cd8-3e98-42c3-b5fe-95f442606a2a/.meta.tmp' to config b'/volumes/_nogroup/046a7cd8-3e98-42c3-b5fe-95f442606a2a/.meta'
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, vol_name:cephfs) < ""
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "format": "json"}]: dispatch
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, vol_name:cephfs) < ""
Dec 06 10:20:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, vol_name:cephfs) < ""
Dec 06 10:20:44 np0005548790.localdomain ceph-mon[301742]: pgmap v357: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 45 KiB/s wr, 33 op/s
Dec 06 10:20:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "format": "json"}]: dispatch
Dec 06 10:20:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9cf2fd4c-90f2-4604-9f84-44511d67581f", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:20:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 45 KiB/s wr, 33 op/s
Dec 06 10:20:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:45.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:45.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:20:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:45.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:20:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:45.353 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:20:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:45.353 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:20:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:45.354 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:20:46 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:20:46 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4269218752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:20:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "format": "json"}]: dispatch
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.253 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.900s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.420 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.422 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11566MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.423 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.423 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.490 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.490 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.521 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:20:46 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:20:46 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2201144363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.943 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.949 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.973 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.975 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:20:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:46.976 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:20:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 37 KiB/s wr, 27 op/s
Dec 06 10:20:47 np0005548790.localdomain ceph-mon[301742]: pgmap v358: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 45 KiB/s wr, 33 op/s
Dec 06 10:20:47 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4269218752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:47 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2201144363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 e179: 6 total, 6 up, 6 in
Dec 06 10:20:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:47.977 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.041 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.043 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.043 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.043 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.044 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.048 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: pgmap v359: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 37 KiB/s wr, 27 op/s
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: osdmap e179: 6 total, 6 up, 6 in
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2851515333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:20:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:48.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:20:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:20:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:20:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:48.402 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:20:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:48.402 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:20:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:48.402 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:20:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:20:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158551 "" "Go-http-client/1.1"
Dec 06 10:20:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:20:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19682 "" "Go-http-client/1.1"
Dec 06 10:20:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3778715811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:20:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3778715811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:48 np0005548790.localdomain podman[317140]: 2025-12-06 10:20:48.571098328 +0000 UTC m=+0.082470843 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:20:48 np0005548790.localdomain podman[317140]: 2025-12-06 10:20:48.583131674 +0000 UTC m=+0.094504239 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:20:48 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "format": "json"}]: dispatch
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:49.066+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '046a7cd8-3e98-42c3-b5fe-95f442606a2a' of type subvolume
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '046a7cd8-3e98-42c3-b5fe-95f442606a2a' of type subvolume
Dec 06 10:20:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, vol_name:cephfs) < ""
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/046a7cd8-3e98-42c3-b5fe-95f442606a2a'' moved to trashcan
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:046a7cd8-3e98-42c3-b5fe-95f442606a2a, vol_name:cephfs) < ""
Dec 06 10:20:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 31 op/s
Dec 06 10:20:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2477604507' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:20:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3778715811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3778715811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "format": "json"}]: dispatch
Dec 06 10:20:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "046a7cd8-3e98-42c3-b5fe-95f442606a2a", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:50 np0005548790.localdomain ceph-mon[301742]: pgmap v361: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 31 op/s
Dec 06 10:20:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 22 KiB/s wr, 26 op/s
Dec 06 10:20:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:20:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:20:51 np0005548790.localdomain podman[317159]: 2025-12-06 10:20:51.590276666 +0000 UTC m=+0.100421709 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:20:51 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:51.606 262327 INFO neutron.agent.linux.ip_lib [None req-63048699-8b90-4f58-bc78-9965669b1f9e - - - - - -] Device tap70395743-5f cannot be used as it has no MAC address
Dec 06 10:20:51 np0005548790.localdomain podman[317160]: 2025-12-06 10:20:51.606865985 +0000 UTC m=+0.113388830 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:20:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:51.627 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:51 np0005548790.localdomain podman[317159]: 2025-12-06 10:20:51.629125398 +0000 UTC m=+0.139270361 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:20:51 np0005548790.localdomain kernel: device tap70395743-5f entered promiscuous mode
Dec 06 10:20:51 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016451.6348] manager: (tap70395743-5f): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Dec 06 10:20:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:51.634 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:51 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:51Z|00198|binding|INFO|Claiming lport 70395743-5ff9-4f02-8e8c-3b5a5e0a434b for this chassis.
Dec 06 10:20:51 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:51Z|00199|binding|INFO|70395743-5ff9-4f02-8e8c-3b5a5e0a434b: Claiming unknown
Dec 06 10:20:51 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:20:51 np0005548790.localdomain systemd-udevd[317217]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:20:51 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:51.646 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f8e190b-c842-4c4b-b45d-aa984767b341, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=70395743-5ff9-4f02-8e8c-3b5a5e0a434b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:51 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:51.648 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 70395743-5ff9-4f02-8e8c-3b5a5e0a434b in datapath dcb7444b-b5da-4fd8-9a90-6d4b016cbb58 bound to our chassis
Dec 06 10:20:51 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:51.650 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dcb7444b-b5da-4fd8-9a90-6d4b016cbb58 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:51 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:51.652 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[ad32aeab-d04c-41f4-bc88-af1b7c45b8c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:51Z|00200|binding|INFO|Setting lport 70395743-5ff9-4f02-8e8c-3b5a5e0a434b ovn-installed in OVS
Dec 06 10:20:51 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:51Z|00201|binding|INFO|Setting lport 70395743-5ff9-4f02-8e8c-3b5a5e0a434b up in Southbound
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain podman[317160]: 2025-12-06 10:20:51.707278384 +0000 UTC m=+0.213801219 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:51.709 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap70395743-5f: No such device
Dec 06 10:20:51 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:20:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:51.725 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:52 np0005548790.localdomain ceph-mon[301742]: pgmap v362: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 22 KiB/s wr, 26 op/s
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[316022]: exiting on receipt of SIGTERM
Dec 06 10:20:52 np0005548790.localdomain podman[317299]: 2025-12-06 10:20:52.437381834 +0000 UTC m=+0.064574279 container kill 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:20:52 np0005548790.localdomain systemd[1]: libpod-901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e.scope: Deactivated successfully.
Dec 06 10:20:52 np0005548790.localdomain podman[317315]: 
Dec 06 10:20:52 np0005548790.localdomain podman[317315]: 2025-12-06 10:20:52.50258577 +0000 UTC m=+0.092413053 container create 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:20:52 np0005548790.localdomain podman[317330]: 2025-12-06 10:20:52.516906777 +0000 UTC m=+0.066434969 container died 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:20:52 np0005548790.localdomain systemd[1]: tmp-crun.PVkOy2.mount: Deactivated successfully.
Dec 06 10:20:52 np0005548790.localdomain podman[317330]: 2025-12-06 10:20:52.55283695 +0000 UTC m=+0.102365112 container cleanup 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:52 np0005548790.localdomain systemd[1]: libpod-conmon-901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e.scope: Deactivated successfully.
Dec 06 10:20:52 np0005548790.localdomain podman[317315]: 2025-12-06 10:20:52.463402169 +0000 UTC m=+0.053229502 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:20:52 np0005548790.localdomain systemd[1]: Started libpod-conmon-367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932.scope.
Dec 06 10:20:52 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:20:52 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68809b2689087c09d34a7605133a76f8a5ca0e326ece1aa36fca4a8ce2d96db7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:20:52 np0005548790.localdomain podman[317331]: 2025-12-06 10:20:52.599973875 +0000 UTC m=+0.142224040 container remove 901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:20:52 np0005548790.localdomain podman[317315]: 2025-12-06 10:20:52.610044268 +0000 UTC m=+0.199871541 container init 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:52.613 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:52 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:52Z|00202|binding|INFO|Releasing lport 0321377d-c77d-49b4-9d1b-9538a197b836 from this chassis (sb_readonly=0)
Dec 06 10:20:52 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:52Z|00203|binding|INFO|Setting lport 0321377d-c77d-49b4-9d1b-9538a197b836 down in Southbound
Dec 06 10:20:52 np0005548790.localdomain kernel: device tap0321377d-c7 left promiscuous mode
Dec 06 10:20:52 np0005548790.localdomain podman[317315]: 2025-12-06 10:20:52.619548625 +0000 UTC m=+0.209375878 container start 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[317365]: started, version 2.85 cachesize 150
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[317365]: DNS service limited to local subnets
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[317365]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[317365]: warning: no upstream servers configured
Dec 06 10:20:52 np0005548790.localdomain dnsmasq-dhcp[317365]: DHCPv6, static leases only on 2001:db8:3::, lease time 1d
Dec 06 10:20:52 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:52.630 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=391dad62-94ba-4f16-98ce-d5116b9429d7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=0321377d-c77d-49b4-9d1b-9538a197b836) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:52 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:52.632 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 0321377d-c77d-49b4-9d1b-9538a197b836 in datapath 02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20 unbound from our chassis
Dec 06 10:20:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:52.635 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/addn_hosts - 0 addresses
Dec 06 10:20:52 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/host
Dec 06 10:20:52 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/opts
Dec 06 10:20:52 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:52.636 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 02f2c06c-65d2-4bc9-a1d2-d3d62c57fd20 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:52 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:52.637 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[4869b5b5-8f1f-4484-9131-a0ff5ddc59e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:52 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:52.664 262327 INFO neutron.agent.dhcp.agent [None req-63048699-8b90-4f58-bc78-9965669b1f9e - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:51Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8581f4c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8581fd30>], id=34d49a3e-8b26-4fc6-a534-64218d8d052c, ip_allocation=immediate, mac_address=fa:16:3e:4d:51:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:48Z, description=, dns_domain=, id=dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1912587890, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57236, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2547, status=ACTIVE, subnets=['0d5e3ec1-4f30-465e-a937-bc19bcea0f52'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:50Z, vlan_transparent=None, network_id=dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2572, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:51Z on network dcb7444b-b5da-4fd8-9a90-6d4b016cbb58
Dec 06 10:20:52 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:52.827 262327 INFO neutron.agent.dhcp.agent [None req-7679edfa-f924-453e-8a84-94a7bb430150 - - - - - -] DHCP configuration for ports {'0ae3d025-d8ad-4d2a-a60b-0ab03f8b62b4'} is completed
Dec 06 10:20:52 np0005548790.localdomain dnsmasq[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/addn_hosts - 1 addresses
Dec 06 10:20:52 np0005548790.localdomain podman[317382]: 2025-12-06 10:20:52.83478368 +0000 UTC m=+0.044055002 container kill 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:52 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/host
Dec 06 10:20:52 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/opts
Dec 06 10:20:52 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:52.873 262327 INFO neutron.agent.dhcp.agent [None req-f71ef16d-3ca0-4bbe-9e0e-286549696771 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:52 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:52.875 262327 INFO neutron.agent.dhcp.agent [None req-f71ef16d-3ca0-4bbe-9e0e-286549696771 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:53.046 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:53 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:53.051 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:53 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:53.117 262327 INFO neutron.agent.dhcp.agent [None req-d0c251e2-cb63-4238-a33e-a14a94ee4bf7 - - - - - -] DHCP configuration for ports {'34d49a3e-8b26-4fc6-a534-64218d8d052c'} is completed
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 21 op/s
Dec 06 10:20:53 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:53.237 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:20:51Z, description=, device_id=b4262675-0888-4a13-bb89-bb38cc732c6a, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8567b850>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8567bf70>], id=34d49a3e-8b26-4fc6-a534-64218d8d052c, ip_allocation=immediate, mac_address=fa:16:3e:4d:51:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:20:48Z, description=, dns_domain=, id=dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1912587890, port_security_enabled=True, project_id=24086b701d6b4d4081d2e63578d18d24, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=57236, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2547, status=ACTIVE, subnets=['0d5e3ec1-4f30-465e-a937-bc19bcea0f52'], tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:50Z, vlan_transparent=None, network_id=dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, port_security_enabled=False, project_id=24086b701d6b4d4081d2e63578d18d24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2572, status=DOWN, tags=[], tenant_id=24086b701d6b4d4081d2e63578d18d24, updated_at=2025-12-06T10:20:51Z on network dcb7444b-b5da-4fd8-9a90-6d4b016cbb58
Dec 06 10:20:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:53.263 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:53 np0005548790.localdomain dnsmasq[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/addn_hosts - 1 addresses
Dec 06 10:20:53 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/host
Dec 06 10:20:53 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/opts
Dec 06 10:20:53 np0005548790.localdomain podman[317420]: 2025-12-06 10:20:53.419696092 +0000 UTC m=+0.055822002 container kill 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:20:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-449c806717219ec28a0158ac663e231ee8e12a65b6f924231d776c3bac507f8f-merged.mount: Deactivated successfully.
Dec 06 10:20:53 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-901e5531e544ba08a69321c97475183fa272a46eb412d8079d565629a948ef9e-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:53 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d02f2c06c\x2d65d2\x2d4bc9\x2da1d2\x2dd3d62c57fd20.mount: Deactivated successfully.
Dec 06 10:20:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:53.531 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:20:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:20:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:20:53 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:53.694 262327 INFO neutron.agent.dhcp.agent [None req-215852da-d560-4a68-97d2-aae977cc17a3 - - - - - -] DHCP configuration for ports {'34d49a3e-8b26-4fc6-a534-64218d8d052c'} is completed
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "format": "json"}]: dispatch
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:20:53 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:20:53.796+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4fe01f5-ec52-46f8-9434-cea0a18dbdb7' of type subvolume
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd4fe01f5-ec52-46f8-9434-cea0a18dbdb7' of type subvolume
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, vol_name:cephfs) < ""
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d4fe01f5-ec52-46f8-9434-cea0a18dbdb7'' moved to trashcan
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:20:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d4fe01f5-ec52-46f8-9434-cea0a18dbdb7, vol_name:cephfs) < ""
Dec 06 10:20:53 np0005548790.localdomain dnsmasq[315859]: exiting on receipt of SIGTERM
Dec 06 10:20:53 np0005548790.localdomain podman[317456]: 2025-12-06 10:20:53.981376335 +0000 UTC m=+0.061981839 container kill fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fd699fe-a5f3-49c0-88c7-01911eab5153, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 06 10:20:53 np0005548790.localdomain systemd[1]: libpod-fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c.scope: Deactivated successfully.
Dec 06 10:20:54 np0005548790.localdomain podman[317476]: 2025-12-06 10:20:54.050442594 +0000 UTC m=+0.044013623 container died fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fd699fe-a5f3-49c0-88c7-01911eab5153, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:20:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:54 np0005548790.localdomain systemd[1]: tmp-crun.mp1i84.mount: Deactivated successfully.
Dec 06 10:20:54 np0005548790.localdomain podman[317476]: 2025-12-06 10:20:54.101967539 +0000 UTC m=+0.095538518 container remove fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1fd699fe-a5f3-49c0-88c7-01911eab5153, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:20:54 np0005548790.localdomain systemd[1]: libpod-conmon-fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c.scope: Deactivated successfully.
Dec 06 10:20:54 np0005548790.localdomain kernel: device tap95e0b5c7-fe left promiscuous mode
Dec 06 10:20:54 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:54Z|00204|binding|INFO|Releasing lport 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 from this chassis (sb_readonly=0)
Dec 06 10:20:54 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:54Z|00205|binding|INFO|Setting lport 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 down in Southbound
Dec 06 10:20:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:54.113 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:54 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:54.120 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-1fd699fe-a5f3-49c0-88c7-01911eab5153', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1fd699fe-a5f3-49c0-88c7-01911eab5153', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f00ab5f7d934f62991ed1e7e798e47e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99a438ef-9e8b-4c22-9cf2-faf6518a2343, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:54 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:54.121 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 95e0b5c7-fe8e-4869-ba1c-3e8e00e81df6 in datapath 1fd699fe-a5f3-49c0-88c7-01911eab5153 unbound from our chassis
Dec 06 10:20:54 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:54.122 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1fd699fe-a5f3-49c0-88c7-01911eab5153 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:54 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:54.123 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[87d40484-44a9-4101-9f32-59c4f0e1273c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:54.135 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:54 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:54.219 262327 INFO neutron.agent.dhcp.agent [None req-f61ddfc6-1327-4fe7-8bd4-30cdb1b40b81 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:54 np0005548790.localdomain ceph-mon[301742]: pgmap v363: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 21 op/s
Dec 06 10:20:54 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:54.500 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-97aced54ce5c907e9167b63ed37d52762223f03a0d1ef7b2633584df53445ee0-merged.mount: Deactivated successfully.
Dec 06 10:20:54 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fb7ac37f81200a0b2f7cd85c1820dde5fa28f9a4019b3cdd2bbbc8fe17cd558c-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:54 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d1fd699fe\x2da5f3\x2d49c0\x2d88c7\x2d01911eab5153.mount: Deactivated successfully.
Dec 06 10:20:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:55.040 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 21 op/s
Dec 06 10:20:55 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "format": "json"}]: dispatch
Dec 06 10:20:55 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d4fe01f5-ec52-46f8-9434-cea0a18dbdb7", "force": true, "format": "json"}]: dispatch
Dec 06 10:20:56 np0005548790.localdomain ceph-mon[301742]: pgmap v364: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 21 op/s
Dec 06 10:20:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 21 op/s
Dec 06 10:20:57 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:57.619 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:57 np0005548790.localdomain dnsmasq[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/addn_hosts - 0 addresses
Dec 06 10:20:57 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/host
Dec 06 10:20:57 np0005548790.localdomain dnsmasq-dhcp[317365]: read /var/lib/neutron/dhcp/dcb7444b-b5da-4fd8-9a90-6d4b016cbb58/opts
Dec 06 10:20:57 np0005548790.localdomain podman[317513]: 2025-12-06 10:20:57.631092998 +0000 UTC m=+0.058121103 container kill 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 10:20:57 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:57Z|00206|binding|INFO|Releasing lport 70395743-5ff9-4f02-8e8c-3b5a5e0a434b from this chassis (sb_readonly=0)
Dec 06 10:20:57 np0005548790.localdomain kernel: device tap70395743-5f left promiscuous mode
Dec 06 10:20:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:57.775 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:57 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:20:57Z|00207|binding|INFO|Setting lport 70395743-5ff9-4f02-8e8c-3b5a5e0a434b down in Southbound
Dec 06 10:20:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:57.786 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '24086b701d6b4d4081d2e63578d18d24', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f8e190b-c842-4c4b-b45d-aa984767b341, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=70395743-5ff9-4f02-8e8c-3b5a5e0a434b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:20:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:57.787 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 70395743-5ff9-4f02-8e8c-3b5a5e0a434b in datapath dcb7444b-b5da-4fd8-9a90-6d4b016cbb58 unbound from our chassis
Dec 06 10:20:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:57.788 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dcb7444b-b5da-4fd8-9a90-6d4b016cbb58 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:20:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:20:57.789 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[07ce2a06-dae3-404f-a68d-c3fef5aecb21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:20:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:57.798 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:58.048 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:58 np0005548790.localdomain ceph-mon[301742]: pgmap v365: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 21 op/s
Dec 06 10:20:58 np0005548790.localdomain dnsmasq[317365]: exiting on receipt of SIGTERM
Dec 06 10:20:58 np0005548790.localdomain podman[317552]: 2025-12-06 10:20:58.602886212 +0000 UTC m=+0.058794003 container kill 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:20:58 np0005548790.localdomain systemd[1]: libpod-367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932.scope: Deactivated successfully.
Dec 06 10:20:58 np0005548790.localdomain podman[317566]: 2025-12-06 10:20:58.660146342 +0000 UTC m=+0.047647750 container died 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:20:58 np0005548790.localdomain systemd[1]: tmp-crun.Mcb80d.mount: Deactivated successfully.
Dec 06 10:20:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932-userdata-shm.mount: Deactivated successfully.
Dec 06 10:20:58 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-68809b2689087c09d34a7605133a76f8a5ca0e326ece1aa36fca4a8ce2d96db7-merged.mount: Deactivated successfully.
Dec 06 10:20:58 np0005548790.localdomain podman[317566]: 2025-12-06 10:20:58.74469163 +0000 UTC m=+0.132192968 container cleanup 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:20:58 np0005548790.localdomain systemd[1]: libpod-conmon-367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932.scope: Deactivated successfully.
Dec 06 10:20:58 np0005548790.localdomain podman[317573]: 2025-12-06 10:20:58.773441818 +0000 UTC m=+0.149129727 container remove 367ce7997c2da196af40963b70e09aa7de264f1ead7fd6fbae7b0bac50ce7932 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcb7444b-b5da-4fd8-9a90-6d4b016cbb58, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:20:58 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:58.802 262327 INFO neutron.agent.dhcp.agent [None req-0d300077-1fa0-447e-ae3d-acc04bb0c7f3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:58 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:20:58.844 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:20:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:20:59.052 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:20:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:20:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 21 KiB/s wr, 20 op/s
Dec 06 10:20:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4153025700' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:20:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4153025700' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:20:59 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2ddcb7444b\x2db5da\x2d4fd8\x2d9a90\x2d6d4b016cbb58.mount: Deactivated successfully.
Dec 06 10:21:00 np0005548790.localdomain sudo[317596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:21:00 np0005548790.localdomain sudo[317596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:21:00 np0005548790.localdomain sudo[317596]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:00 np0005548790.localdomain sudo[317614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:21:00 np0005548790.localdomain sudo[317614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:21:00 np0005548790.localdomain ceph-mon[301742]: pgmap v366: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 21 KiB/s wr, 20 op/s
Dec 06 10:21:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/11180419' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/11180419' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:00 np0005548790.localdomain sudo[317614]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:21:01 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev cd566e46-d5ff-4d43-9500-c4a43b8b7a82 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:21:01 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev cd566e46-d5ff-4d43-9500-c4a43b8b7a82 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:21:01 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event cd566e46-d5ff-4d43-9500-c4a43b8b7a82 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:21:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s rd, 11 KiB/s wr, 14 op/s
Dec 06 10:21:01 np0005548790.localdomain sudo[317664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:21:01 np0005548790.localdomain sudo[317664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:21:01 np0005548790.localdomain sudo[317664]: pam_unix(sudo:session): session closed for user root
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:21:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:21:02 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:21:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:21:02 np0005548790.localdomain ceph-mon[301742]: pgmap v367: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s rd, 11 KiB/s wr, 14 op/s
Dec 06 10:21:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:21:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:03.052 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 31 op/s
Dec 06 10:21:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:21:04 np0005548790.localdomain ceph-mon[301742]: pgmap v368: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 KiB/s wr, 31 op/s
Dec 06 10:21:04 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:04.619 2 INFO neutron.agent.securitygroups_rpc [None req-a8006eb6-87bf-4e51-be14-a8e4ed75c69e a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 7.7 KiB/s wr, 19 op/s
Dec 06 10:21:06 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:06.330 2 INFO neutron.agent.securitygroups_rpc [None req-58c814b2-70d6-487c-b572-f1a393accc35 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:06 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:06.563 2 INFO neutron.agent.securitygroups_rpc [None req-58c814b2-70d6-487c-b572-f1a393accc35 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:06 np0005548790.localdomain ceph-mon[301742]: pgmap v369: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 7.7 KiB/s wr, 19 op/s
Dec 06 10:21:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:06.852 262327 INFO neutron.agent.linux.ip_lib [None req-b232fb85-2bf5-4df0-8711-47f02dad0501 - - - - - -] Device tap9d07566a-d2 cannot be used as it has no MAC address
Dec 06 10:21:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:06.882 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:06 np0005548790.localdomain kernel: device tap9d07566a-d2 entered promiscuous mode
Dec 06 10:21:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:06Z|00208|binding|INFO|Claiming lport 9d07566a-d24e-4190-a69c-6da5b41866d2 for this chassis.
Dec 06 10:21:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:06Z|00209|binding|INFO|9d07566a-d24e-4190-a69c-6da5b41866d2: Claiming unknown
Dec 06 10:21:06 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016466.8950] manager: (tap9d07566a-d2): new Generic device (/org/freedesktop/NetworkManager/Devices/43)
Dec 06 10:21:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:06.893 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:06 np0005548790.localdomain systemd-udevd[317692]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:21:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:06.905 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e971920a-12b9-4c1b-998f-dbefa09cd7d0, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=9d07566a-d24e-4190-a69c-6da5b41866d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:06.907 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 9d07566a-d24e-4190-a69c-6da5b41866d2 in datapath c60b3ea9-a160-4cba-8b8f-5c4ac907ced9 bound to our chassis
Dec 06 10:21:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:06.909 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3ae59f42-7b20-43ef-a037-9c3d7cff0db8 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:21:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:06.910 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:21:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:06.911 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[90f53f0a-b576-4067-958c-654728cf71f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:06Z|00210|binding|INFO|Setting lport 9d07566a-d24e-4190-a69c-6da5b41866d2 ovn-installed in OVS
Dec 06 10:21:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:06Z|00211|binding|INFO|Setting lport 9d07566a-d24e-4190-a69c-6da5b41866d2 up in Southbound
Dec 06 10:21:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:06.938 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap9d07566a-d2: No such device
Dec 06 10:21:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:06.983 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:07.014 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 7.7 KiB/s wr, 19 op/s
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:21:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:21:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:21:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2520375608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:21:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2520375608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:07 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:07.527 2 INFO neutron.agent.securitygroups_rpc [None req-beefe55f-e6d8-4aae-b3a4-9c077707e8ab a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:07 np0005548790.localdomain podman[317763]: 2025-12-06 10:21:07.931718161 +0000 UTC m=+0.067560539 container create 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:21:07 np0005548790.localdomain systemd[1]: Started libpod-conmon-4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11.scope.
Dec 06 10:21:07 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:21:07 np0005548790.localdomain podman[317763]: 2025-12-06 10:21:07.898559774 +0000 UTC m=+0.034402172 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:21:07 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fc205753b7223112e57e7cca29b79b68d1163e3150c21ca4f508614ee3b641d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:08 np0005548790.localdomain podman[317763]: 2025-12-06 10:21:08.005749496 +0000 UTC m=+0.141591874 container init 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:08 np0005548790.localdomain podman[317763]: 2025-12-06 10:21:08.015324435 +0000 UTC m=+0.151166813 container start 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:21:08 np0005548790.localdomain dnsmasq[317782]: started, version 2.85 cachesize 150
Dec 06 10:21:08 np0005548790.localdomain dnsmasq[317782]: DNS service limited to local subnets
Dec 06 10:21:08 np0005548790.localdomain dnsmasq[317782]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:21:08 np0005548790.localdomain dnsmasq[317782]: warning: no upstream servers configured
Dec 06 10:21:08 np0005548790.localdomain dnsmasq-dhcp[317782]: DHCP, static leases only on 10.101.0.0, lease time 1d
Dec 06 10:21:08 np0005548790.localdomain dnsmasq[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/addn_hosts - 0 addresses
Dec 06 10:21:08 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/host
Dec 06 10:21:08 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/opts
Dec 06 10:21:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:08.053 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:08.057 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:08 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:08.143 262327 INFO neutron.agent.dhcp.agent [None req-a518ec98-681d-45cb-a223-6a0417cff518 - - - - - -] DHCP configuration for ports {'ef9ad3f7-7e37-41e9-bf45-757e4ae852c5'} is completed
Dec 06 10:21:08 np0005548790.localdomain ceph-mon[301742]: pgmap v370: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 7.7 KiB/s wr, 19 op/s
Dec 06 10:21:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2520375608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2520375608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3418725974' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3418725974' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:08 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:08.503 2 INFO neutron.agent.securitygroups_rpc [None req-4469b72b-a2a8-46c0-b170-fb645c70fec6 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta.tmp'
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta.tmp' to config b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta'
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "format": "json"}]: dispatch
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:21:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 9.9 KiB/s wr, 62 op/s
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "format": "json"}]: dispatch
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/126387762' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/126387762' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3307122424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3307122424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:21:09 np0005548790.localdomain podman[317783]: 2025-12-06 10:21:09.574426395 +0000 UTC m=+0.083194813 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 10:21:09 np0005548790.localdomain podman[317783]: 2025-12-06 10:21:09.609187745 +0000 UTC m=+0.117956063 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 10:21:09 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:21:10 np0005548790.localdomain ceph-mon[301742]: pgmap v371: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 9.9 KiB/s wr, 62 op/s
Dec 06 10:21:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1826059961' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1826059961' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:10 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:10.863 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:10Z, description=, device_id=7ecd23ba-4ca3-4eae-9829-cff158a165a0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856dbe80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856dbd00>], id=23c354bf-bd32-4338-adf6-2fda06de88cc, ip_allocation=immediate, mac_address=fa:16:3e:38:ce:3a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:01Z, description=, dns_domain=, id=c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-810087815, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3635, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2617, status=ACTIVE, subnets=['396905c4-a35e-4112-b7e3-493c2c119a0d'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:04Z, vlan_transparent=None, network_id=c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2665, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:10Z on network c60b3ea9-a160-4cba-8b8f-5c4ac907ced9
Dec 06 10:21:11 np0005548790.localdomain dnsmasq[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/addn_hosts - 1 addresses
Dec 06 10:21:11 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/host
Dec 06 10:21:11 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/opts
Dec 06 10:21:11 np0005548790.localdomain podman[317816]: 2025-12-06 10:21:11.122708691 +0000 UTC m=+0.063415697 container kill 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 3.7 KiB/s wr, 60 op/s
Dec 06 10:21:11 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:11.390 262327 INFO neutron.agent.dhcp.agent [None req-74cf98b8-a293-43e0-a6e4-8d10b7d91409 - - - - - -] DHCP configuration for ports {'23c354bf-bd32-4338-adf6-2fda06de88cc'} is completed
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "snap_name": "2eb048c9-c193-4307-94d1-af1469cf7b46", "format": "json"}]: dispatch
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2eb048c9-c193-4307-94d1-af1469cf7b46, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2eb048c9-c193-4307-94d1-af1469cf7b46, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:12.056 2 INFO neutron.agent.securitygroups_rpc [None req-a8044dd6-257e-4d6a-a5a6-1617984725a4 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:12.190 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:10Z, description=, device_id=7ecd23ba-4ca3-4eae-9829-cff158a165a0, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85602940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856028b0>], id=23c354bf-bd32-4338-adf6-2fda06de88cc, ip_allocation=immediate, mac_address=fa:16:3e:38:ce:3a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:01Z, description=, dns_domain=, id=c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-810087815, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3635, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2617, status=ACTIVE, subnets=['396905c4-a35e-4112-b7e3-493c2c119a0d'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:04Z, vlan_transparent=None, network_id=c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2665, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:10Z on network c60b3ea9-a160-4cba-8b8f-5c4ac907ced9
Dec 06 10:21:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:12.210 262327 INFO neutron.agent.linux.ip_lib [None req-8a38b5a4-1ff6-4295-9c6f-5b8185f87f60 - - - - - -] Device tap058a6a84-b8 cannot be used as it has no MAC address
Dec 06 10:21:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:12.235 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 np0005548790.localdomain kernel: device tap058a6a84-b8 entered promiscuous mode
Dec 06 10:21:12 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016472.2444] manager: (tap058a6a84-b8): new Generic device (/org/freedesktop/NetworkManager/Devices/44)
Dec 06 10:21:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:12Z|00212|binding|INFO|Claiming lport 058a6a84-b82f-4330-a0e4-8c6ceb06bf50 for this chassis.
Dec 06 10:21:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:12Z|00213|binding|INFO|058a6a84-b82f-4330-a0e4-8c6ceb06bf50: Claiming unknown
Dec 06 10:21:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:12.245 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 np0005548790.localdomain systemd-udevd[317846]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:21:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:12.257 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-442df335-20da-42ba-b388-7067c8ba7a13', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-442df335-20da-42ba-b388-7067c8ba7a13', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a18f82f0d09644c7b6d23e2fece8be4f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e2d2a64-8d59-4866-b6b5-5768fec159c0, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=058a6a84-b82f-4330-a0e4-8c6ceb06bf50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:12.260 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 058a6a84-b82f-4330-a0e4-8c6ceb06bf50 in datapath 442df335-20da-42ba-b388-7067c8ba7a13 bound to our chassis
Dec 06 10:21:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:12.261 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 442df335-20da-42ba-b388-7067c8ba7a13 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:21:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:12.262 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[abc2e7ef-4050-40b8-915f-fdfb9f1d5586]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:12.282 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:12Z|00214|binding|INFO|Setting lport 058a6a84-b82f-4330-a0e4-8c6ceb06bf50 ovn-installed in OVS
Dec 06 10:21:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:12Z|00215|binding|INFO|Setting lport 058a6a84-b82f-4330-a0e4-8c6ceb06bf50 up in Southbound
Dec 06 10:21:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:12.287 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:12.324 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:12.356 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:12 np0005548790.localdomain ceph-mon[301742]: pgmap v372: 177 pgs: 177 active+clean; 193 MiB data, 904 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 3.7 KiB/s wr, 60 op/s
Dec 06 10:21:12 np0005548790.localdomain dnsmasq[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/addn_hosts - 1 addresses
Dec 06 10:21:12 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/host
Dec 06 10:21:12 np0005548790.localdomain podman[317871]: 2025-12-06 10:21:12.430550319 +0000 UTC m=+0.064662501 container kill 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:21:12 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/opts
Dec 06 10:21:12 np0005548790.localdomain systemd[1]: tmp-crun.uQGTH1.mount: Deactivated successfully.
Dec 06 10:21:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:12.764 2 INFO neutron.agent.securitygroups_rpc [None req-ce6b28f9-e93f-45d2-8a9b-cc88bd3abac1 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:12.765 262327 INFO neutron.agent.dhcp.agent [None req-6c0e775c-c587-45e0-9747-82bd478eccc5 - - - - - -] DHCP configuration for ports {'23c354bf-bd32-4338-adf6-2fda06de88cc'} is completed
Dec 06 10:21:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:13.091 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.091 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:13.096 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.097 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:21:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 193 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 8.5 KiB/s wr, 101 op/s
Dec 06 10:21:13 np0005548790.localdomain podman[317938]: 
Dec 06 10:21:13 np0005548790.localdomain podman[317938]: 2025-12-06 10:21:13.249956488 +0000 UTC m=+0.085481385 container create 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: Started libpod-conmon-110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae.scope.
Dec 06 10:21:13 np0005548790.localdomain podman[317938]: 2025-12-06 10:21:13.209142303 +0000 UTC m=+0.044667270 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:21:13 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a2c673bc634d414924c9bdee0eea8e42bb900df86c676c16df72832331a8dd8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:13 np0005548790.localdomain podman[317938]: 2025-12-06 10:21:13.344414674 +0000 UTC m=+0.179939591 container init 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317998]: started, version 2.85 cachesize 150
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317998]: DNS service limited to local subnets
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317998]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317998]: warning: no upstream servers configured
Dec 06 10:21:13 np0005548790.localdomain dnsmasq-dhcp[317998]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317998]: read /var/lib/neutron/dhcp/442df335-20da-42ba-b388-7067c8ba7a13/addn_hosts - 0 addresses
Dec 06 10:21:13 np0005548790.localdomain dnsmasq-dhcp[317998]: read /var/lib/neutron/dhcp/442df335-20da-42ba-b388-7067c8ba7a13/host
Dec 06 10:21:13 np0005548790.localdomain dnsmasq-dhcp[317998]: read /var/lib/neutron/dhcp/442df335-20da-42ba-b388-7067c8ba7a13/opts
Dec 06 10:21:13 np0005548790.localdomain podman[317952]: 2025-12-06 10:21:13.387365067 +0000 UTC m=+0.096036860 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:21:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "snap_name": "2eb048c9-c193-4307-94d1-af1469cf7b46", "format": "json"}]: dispatch
Dec 06 10:21:13 np0005548790.localdomain podman[317952]: 2025-12-06 10:21:13.402025844 +0000 UTC m=+0.110697607 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:21:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:13Z|00216|binding|INFO|Removing iface tap058a6a84-b8 ovn-installed in OVS
Dec 06 10:21:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:13Z|00217|binding|INFO|Removing lport 058a6a84-b82f-4330-a0e4-8c6ceb06bf50 ovn-installed in OVS
Dec 06 10:21:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:13.423 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.424 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port cca7ecda-1d38-419b-92ab-1de2abcf94b9 with type ""
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.426 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-442df335-20da-42ba-b388-7067c8ba7a13', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-442df335-20da-42ba-b388-7067c8ba7a13', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a18f82f0d09644c7b6d23e2fece8be4f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e2d2a64-8d59-4866-b6b5-5768fec159c0, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=058a6a84-b82f-4330-a0e4-8c6ceb06bf50) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.428 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 058a6a84-b82f-4330-a0e4-8c6ceb06bf50 in datapath 442df335-20da-42ba-b388-7067c8ba7a13 unbound from our chassis
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.429 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 442df335-20da-42ba-b388-7067c8ba7a13 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.430 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c7600b6f-e154-4319-ba25-df8b75a3b1dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:13.432 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:13 np0005548790.localdomain podman[317938]: 2025-12-06 10:21:13.508181757 +0000 UTC m=+0.343706684 container start 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/addn_hosts - 0 addresses
Dec 06 10:21:13 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/host
Dec 06 10:21:13 np0005548790.localdomain dnsmasq-dhcp[317782]: read /var/lib/neutron/dhcp/c60b3ea9-a160-4cba-8b8f-5c4ac907ced9/opts
Dec 06 10:21:13 np0005548790.localdomain podman[318026]: 2025-12-06 10:21:13.54633675 +0000 UTC m=+0.057979470 container kill 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:13 np0005548790.localdomain podman[317953]: 2025-12-06 10:21:13.596896309 +0000 UTC m=+0.301595595 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:21:13 np0005548790.localdomain podman[317954]: 2025-12-06 10:21:13.457414683 +0000 UTC m=+0.157564815 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:21:13 np0005548790.localdomain podman[317954]: 2025-12-06 10:21:13.642565685 +0000 UTC m=+0.342715877 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=)
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:21:13 np0005548790.localdomain podman[317953]: 2025-12-06 10:21:13.66235505 +0000 UTC m=+0.367054366 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:21:13 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:13.686 262327 INFO neutron.agent.dhcp.agent [None req-4c022391-e75e-4ac6-82b6-a2deb3dd38a0 - - - - - -] DHCP configuration for ports {'3daae02e-18da-4a6d-9985-48a869110f8d'} is completed
Dec 06 10:21:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:13.729 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:13 np0005548790.localdomain kernel: device tap9d07566a-d2 left promiscuous mode
Dec 06 10:21:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:13Z|00218|binding|INFO|Releasing lport 9d07566a-d24e-4190-a69c-6da5b41866d2 from this chassis (sb_readonly=0)
Dec 06 10:21:13 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:13Z|00219|binding|INFO|Setting lport 9d07566a-d24e-4190-a69c-6da5b41866d2 down in Southbound
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.739 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e971920a-12b9-4c1b-998f-dbefa09cd7d0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=9d07566a-d24e-4190-a69c-6da5b41866d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.741 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 9d07566a-d24e-4190-a69c-6da5b41866d2 in datapath c60b3ea9-a160-4cba-8b8f-5c4ac907ced9 unbound from our chassis
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.743 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:21:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:13.744 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddab8d6-7d72-4068-862f-5c73e235ea93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:13.751 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:13 np0005548790.localdomain dnsmasq[317998]: exiting on receipt of SIGTERM
Dec 06 10:21:13 np0005548790.localdomain podman[318073]: 2025-12-06 10:21:13.840591645 +0000 UTC m=+0.060093388 container kill 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: libpod-110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae.scope: Deactivated successfully.
Dec 06 10:21:13 np0005548790.localdomain podman[318085]: 2025-12-06 10:21:13.912927522 +0000 UTC m=+0.058437012 container died 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:13 np0005548790.localdomain podman[318085]: 2025-12-06 10:21:13.950741836 +0000 UTC m=+0.096251276 container cleanup 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:21:13 np0005548790.localdomain systemd[1]: libpod-conmon-110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae.scope: Deactivated successfully.
Dec 06 10:21:13 np0005548790.localdomain podman[318087]: 2025-12-06 10:21:13.994269934 +0000 UTC m=+0.132974070 container remove 110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-442df335-20da-42ba-b388-7067c8ba7a13, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:14.005 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:14 np0005548790.localdomain kernel: device tap058a6a84-b8 left promiscuous mode
Dec 06 10:21:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:14.019 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:14.035 262327 INFO neutron.agent.dhcp.agent [None req-16003862-c72e-49aa-9f75-063071343743 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:14.036 262327 INFO neutron.agent.dhcp.agent [None req-16003862-c72e-49aa-9f75-063071343743 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:14.036 262327 INFO neutron.agent.dhcp.agent [None req-16003862-c72e-49aa-9f75-063071343743 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:21:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:14.175 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:14 np0005548790.localdomain ceph-mon[301742]: pgmap v373: 177 pgs: 177 active+clean; 193 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 8.5 KiB/s wr, 101 op/s
Dec 06 10:21:14 np0005548790.localdomain systemd[1]: tmp-crun.V8bNiY.mount: Deactivated successfully.
Dec 06 10:21:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-0a2c673bc634d414924c9bdee0eea8e42bb900df86c676c16df72832331a8dd8-merged.mount: Deactivated successfully.
Dec 06 10:21:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-110c6ec19194739467012c145e4d46715c9f58af3d60a5e78cc35f520f97ffae-userdata-shm.mount: Deactivated successfully.
Dec 06 10:21:14 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d442df335\x2d20da\x2d42ba\x2db388\x2d7067c8ba7a13.mount: Deactivated successfully.
Dec 06 10:21:14 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:14.548 2 INFO neutron.agent.securitygroups_rpc [None req-ff4891e2-327c-4b51-b5fc-5a809ca2f304 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['903bc67e-3e9d-4b17-b669-c51b1cfd9fe6']
Dec 06 10:21:14 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:14.890 2 INFO neutron.agent.securitygroups_rpc [None req-4caeb4e0-6839-4739-a438-dff319ba5ebb 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['903bc67e-3e9d-4b17-b669-c51b1cfd9fe6']
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 193 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 7.0 KiB/s wr, 84 op/s
Dec 06 10:21:15 np0005548790.localdomain dnsmasq[317782]: exiting on receipt of SIGTERM
Dec 06 10:21:15 np0005548790.localdomain podman[318132]: 2025-12-06 10:21:15.795331952 +0000 UTC m=+0.057910708 container kill 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 10:21:15 np0005548790.localdomain systemd[1]: libpod-4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11.scope: Deactivated successfully.
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "snap_name": "2eb048c9-c193-4307-94d1-af1469cf7b46_f41ba66f-ce64-424e-90ff-4fce011eb0df", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2eb048c9-c193-4307-94d1-af1469cf7b46_f41ba66f-ce64-424e-90ff-4fce011eb0df, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:15 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:15.814 2 INFO neutron.agent.securitygroups_rpc [None req-90100f1d-f80a-4a04-b568-2870d38561f7 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta.tmp'
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta.tmp' to config b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta'
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2eb048c9-c193-4307-94d1-af1469cf7b46_f41ba66f-ce64-424e-90ff-4fce011eb0df, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "snap_name": "2eb048c9-c193-4307-94d1-af1469cf7b46", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2eb048c9-c193-4307-94d1-af1469cf7b46, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta.tmp'
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta.tmp' to config b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c/.meta'
Dec 06 10:21:15 np0005548790.localdomain podman[318146]: 2025-12-06 10:21:15.868576115 +0000 UTC m=+0.060581561 container died 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:21:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2eb048c9-c193-4307-94d1-af1469cf7b46, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:15 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11-userdata-shm.mount: Deactivated successfully.
Dec 06 10:21:15 np0005548790.localdomain podman[318146]: 2025-12-06 10:21:15.899104281 +0000 UTC m=+0.091109697 container cleanup 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:15 np0005548790.localdomain systemd[1]: libpod-conmon-4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11.scope: Deactivated successfully.
Dec 06 10:21:15 np0005548790.localdomain podman[318148]: 2025-12-06 10:21:15.949736141 +0000 UTC m=+0.132702862 container remove 4b007b648654143fe518784a80ef5bda81ebea2796f663dcfd48c1f1fe9f2a11 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c60b3ea9-a160-4cba-8b8f-5c4ac907ced9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:16.201 262327 INFO neutron.agent.dhcp.agent [None req-3e5415a3-b860-4288-af46-986bda4b52de - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:16 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:16.204 2 INFO neutron.agent.securitygroups_rpc [None req-1962285c-ec71-4abe-922e-2812550a1f59 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:16.319 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:16 np0005548790.localdomain ceph-mon[301742]: pgmap v374: 177 pgs: 177 active+clean; 193 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 7.0 KiB/s wr, 84 op/s
Dec 06 10:21:16 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:16.659 2 INFO neutron.agent.securitygroups_rpc [None req-62279be6-b47d-4f82-9bae-6dcf35358e74 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:16.716 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:16 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-2fc205753b7223112e57e7cca29b79b68d1163e3150c21ca4f508614ee3b641d-merged.mount: Deactivated successfully.
Dec 06 10:21:16 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2dc60b3ea9\x2da160\x2d4cba\x2d8b8f\x2d5c4ac907ced9.mount: Deactivated successfully.
Dec 06 10:21:16 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:16.876 2 INFO neutron.agent.securitygroups_rpc [None req-c061e6e9-3e5b-41ac-9ef0-5285d101333c 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:17.120 2 INFO neutron.agent.securitygroups_rpc [None req-2a7b6fd7-f3e3-4d7a-9d68-bfb25de0babb 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 193 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 7.0 KiB/s wr, 84 op/s
Dec 06 10:21:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:17.300 2 INFO neutron.agent.securitygroups_rpc [None req-a7542a85-dea5-4041-a20f-6eded131077b 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "snap_name": "2eb048c9-c193-4307-94d1-af1469cf7b46_f41ba66f-ce64-424e-90ff-4fce011eb0df", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "snap_name": "2eb048c9-c193-4307-94d1-af1469cf7b46", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:17 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:17.520 2 INFO neutron.agent.securitygroups_rpc [None req-2962579a-8cbd-4a57-b13b-f9182cbc39c2 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:18.071 2 INFO neutron.agent.securitygroups_rpc [None req-c4974b4a-fd74-4b95-bbd1-546aef178ffe 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:18.134 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:18.196 2 INFO neutron.agent.securitygroups_rpc [None req-3a1797f5-55dd-437d-aa3c-dfbf79d9d8b2 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:18.213 2 INFO neutron.agent.securitygroups_rpc [None req-44e28314-226f-4847-9798-44b59d6b4b35 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:18.334 2 INFO neutron.agent.securitygroups_rpc [None req-b55ae0b1-9a25-4f2d-ae6a-9ce8bc1e6fe6 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['0559edeb-45f4-4489-a671-f1d11bd0b93c']
Dec 06 10:21:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:21:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:21:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:21:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:21:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:21:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18747 "" "Go-http-client/1.1"
Dec 06 10:21:18 np0005548790.localdomain ceph-mon[301742]: pgmap v375: 177 pgs: 177 active+clean; 193 MiB data, 908 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 7.0 KiB/s wr, 84 op/s
Dec 06 10:21:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:21:18 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1072020705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:21:18 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1072020705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:18 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:18.576 2 INFO neutron.agent.securitygroups_rpc [None req-b097e5fe-7591-4c88-9845-0ee25de4ff5d a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:19.022 2 INFO neutron.agent.securitygroups_rpc [None req-634a6650-1d18-4bf0-bb6f-8096dc9b484b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "format": "json"}]: dispatch
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:19.099 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:21:19.102+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '279e0b5d-0e40-4978-82df-d3bcd56c5a3c' of type subvolume
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '279e0b5d-0e40-4978-82df-d3bcd56c5a3c' of type subvolume
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/279e0b5d-0e40-4978-82df-d3bcd56c5a3c'' moved to trashcan
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:279e0b5d-0e40-4978-82df-d3bcd56c5a3c, vol_name:cephfs) < ""
Dec 06 10:21:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:19.198 2 INFO neutron.agent.securitygroups_rpc [None req-8740ee6b-b668-48f2-97f8-ee50cd2a4f10 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['3e4cda00-96df-465b-a218-fdd9aa158162']
Dec 06 10:21:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 193 MiB data, 909 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 14 KiB/s wr, 87 op/s
Dec 06 10:21:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:19.440 2 INFO neutron.agent.securitygroups_rpc [None req-44297d43-b394-4987-b7fa-1e1c5c65d7e5 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:21:19 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1072020705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:19 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1072020705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:19 np0005548790.localdomain podman[318176]: 2025-12-06 10:21:19.571616473 +0000 UTC m=+0.081533968 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Dec 06 10:21:19 np0005548790.localdomain podman[318176]: 2025-12-06 10:21:19.586143847 +0000 UTC m=+0.096061382 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:21:19 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:21:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:19.802 2 INFO neutron.agent.securitygroups_rpc [None req-ec21728d-7a8b-4886-a802-a9f2fd832d5f a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:20 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:20.072 2 INFO neutron.agent.securitygroups_rpc [None req-ad373519-aa70-4a24-9c68-f8b3d34f07d1 05cea3733946411abb747782f855ad13 e82deaff368b4feea9fec0f06459a6ca - - default default] Security group member updated ['49ffd6de-2ba3-48fb-87b6-b485622383ee']
Dec 06 10:21:20 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:20.107 2 INFO neutron.agent.securitygroups_rpc [None req-387cce7d-30a2-4fff-b469-9e054d53578e 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['2dafdbdc-1eca-4442-97f4-c504a138db8a']
Dec 06 10:21:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e180 e180: 6 total, 6 up, 6 in
Dec 06 10:21:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "format": "json"}]: dispatch
Dec 06 10:21:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "279e0b5d-0e40-4978-82df-d3bcd56c5a3c", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:20 np0005548790.localdomain ceph-mon[301742]: pgmap v376: 177 pgs: 177 active+clean; 193 MiB data, 909 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 14 KiB/s wr, 87 op/s
Dec 06 10:21:20 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:20.586 2 INFO neutron.agent.securitygroups_rpc [None req-d0a643e4-c9dc-4942-bc89-0b7141ebc4ce 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['2dafdbdc-1eca-4442-97f4-c504a138db8a']
Dec 06 10:21:20 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:20.878 2 INFO neutron.agent.securitygroups_rpc [None req-46e2ac08-cde4-4c43-95df-6267eb9b2508 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 193 MiB data, 909 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 14 KiB/s wr, 52 op/s
Dec 06 10:21:21 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:21.310 2 INFO neutron.agent.securitygroups_rpc [None req-7161a7af-e2d1-4aef-85f9-991736510e62 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['9ec4be56-d6a3-492b-8ed5-b0e035114ef3']
Dec 06 10:21:21 np0005548790.localdomain ceph-mon[301742]: osdmap e180: 6 total, 6 up, 6 in
Dec 06 10:21:21 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:21.956 2 INFO neutron.agent.securitygroups_rpc [None req-5446130a-9f42-445f-9ada-e84f5ef55ebf 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['9ec4be56-d6a3-492b-8ed5-b0e035114ef3']
Dec 06 10:21:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:21:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:21:22 np0005548790.localdomain ceph-mon[301742]: pgmap v378: 177 pgs: 177 active+clean; 193 MiB data, 909 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 14 KiB/s wr, 52 op/s
Dec 06 10:21:22 np0005548790.localdomain podman[318196]: 2025-12-06 10:21:22.579869056 +0000 UTC m=+0.088967529 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:21:22 np0005548790.localdomain podman[318196]: 2025-12-06 10:21:22.621442551 +0000 UTC m=+0.130541034 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 06 10:21:22 np0005548790.localdomain systemd[1]: tmp-crun.BWMdMx.mount: Deactivated successfully.
Dec 06 10:21:22 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:21:22 np0005548790.localdomain podman[318195]: 2025-12-06 10:21:22.622546771 +0000 UTC m=+0.134424350 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:21:22 np0005548790.localdomain podman[318195]: 2025-12-06 10:21:22.702035203 +0000 UTC m=+0.213912832 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:21:22 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:21:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:23.030 2 INFO neutron.agent.securitygroups_rpc [None req-027b5fbe-7eaf-4a47-82ae-b4c37a4f7304 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']
Dec 06 10:21:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:23.138 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:23.140 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:23.140 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:21:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:23.140 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:21:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:23.187 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:23.188 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:21:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 16 KiB/s wr, 38 op/s
Dec 06 10:21:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:23.385 2 INFO neutron.agent.securitygroups_rpc [None req-045a3cff-be52-4b3f-bacc-9f67cad8a71e 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']
Dec 06 10:21:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1202839971' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1202839971' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:21:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:21:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:21:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:21:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:21:23 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:23.768 2 INFO neutron.agent.securitygroups_rpc [None req-7256d2df-f723-495d-9fc4-3babfe884545 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']
Dec 06 10:21:24 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:24.018 2 INFO neutron.agent.securitygroups_rpc [None req-41802285-db31-4507-ae87-65b8788f6146 05cea3733946411abb747782f855ad13 e82deaff368b4feea9fec0f06459a6ca - - default default] Security group member updated ['49ffd6de-2ba3-48fb-87b6-b485622383ee']
Dec 06 10:21:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:24 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:24.196 2 INFO neutron.agent.securitygroups_rpc [None req-2554a227-dcf4-4ba9-820f-f0670d58a0fd 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']
Dec 06 10:21:24 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:24.513 2 INFO neutron.agent.securitygroups_rpc [None req-59f267c4-4d57-4206-9795-2fa0d0bba04b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:24 np0005548790.localdomain ceph-mon[301742]: pgmap v379: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 16 KiB/s wr, 38 op/s
Dec 06 10:21:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:25.010 2 INFO neutron.agent.securitygroups_rpc [None req-e2ff1412-56d0-4e73-b6d0-7b3abe3cbdda 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']
Dec 06 10:21:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 16 KiB/s wr, 38 op/s
Dec 06 10:21:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:25.308 2 INFO neutron.agent.securitygroups_rpc [None req-502d7035-1806-4b5f-89df-e87b8d86a05e 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['e92480e8-6d06-4c62-9df4-cfcbb83c433c']
Dec 06 10:21:25 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:25.524 2 INFO neutron.agent.securitygroups_rpc [None req-4635dd6b-d978-4577-967f-9e6194773640 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:26.028 2 INFO neutron.agent.securitygroups_rpc [None req-47fed9fe-9d6c-4fec-9c42-e4b99f91a654 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:26.084 2 INFO neutron.agent.securitygroups_rpc [None req-23962b7d-779d-4700-97cd-a1c3604fb216 14db54f428d84c88aba32e6937011f75 969bd3cedbbc4b03a4546a8b852f13f2 - - default default] Security group rule updated ['4227525c-3196-4e1b-83f0-62a3222dd04d']
Dec 06 10:21:26 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:26.542 2 INFO neutron.agent.securitygroups_rpc [None req-79d1c16c-d523-499a-9021-4bdb2b87f9e2 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:26 np0005548790.localdomain ceph-mon[301742]: pgmap v380: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 16 KiB/s wr, 38 op/s
Dec 06 10:21:26 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/921077461' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:26 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/921077461' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 16 KiB/s wr, 38 op/s
Dec 06 10:21:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e181 e181: 6 total, 6 up, 6 in
Dec 06 10:21:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:28.189 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:28.189 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:28.190 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:21:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:28.190 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:21:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:28.242 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:28.243 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:21:28 np0005548790.localdomain ceph-mon[301742]: pgmap v381: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 16 KiB/s wr, 38 op/s
Dec 06 10:21:28 np0005548790.localdomain ceph-mon[301742]: osdmap e181: 6 total, 6 up, 6 in
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:586fac81-a866-4e62-9321-4e3fa9e20434, vol_name:cephfs) < ""
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/586fac81-a866-4e62-9321-4e3fa9e20434/.meta.tmp'
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/586fac81-a866-4e62-9321-4e3fa9e20434/.meta.tmp' to config b'/volumes/_nogroup/586fac81-a866-4e62-9321-4e3fa9e20434/.meta'
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:586fac81-a866-4e62-9321-4e3fa9e20434, vol_name:cephfs) < ""
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "format": "json"}]: dispatch
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:586fac81-a866-4e62-9321-4e3fa9e20434, vol_name:cephfs) < ""
Dec 06 10:21:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:586fac81-a866-4e62-9321-4e3fa9e20434, vol_name:cephfs) < ""
Dec 06 10:21:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 16 KiB/s wr, 61 op/s
Dec 06 10:21:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "format": "json"}]: dispatch
Dec 06 10:21:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:29 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:29.395 262327 INFO neutron.agent.linux.ip_lib [None req-9b1281ff-b323-4c18-b259-f3d764beab27 - - - - - -] Device tap699fdaf2-b6 cannot be used as it has no MAC address
Dec 06 10:21:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:29.455 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:29 np0005548790.localdomain kernel: device tap699fdaf2-b6 entered promiscuous mode
Dec 06 10:21:29 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016489.4649] manager: (tap699fdaf2-b6): new Generic device (/org/freedesktop/NetworkManager/Devices/45)
Dec 06 10:21:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:29Z|00220|binding|INFO|Claiming lport 699fdaf2-b63f-467d-898b-17603a68ff5c for this chassis.
Dec 06 10:21:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:29Z|00221|binding|INFO|699fdaf2-b63f-467d-898b-17603a68ff5c: Claiming unknown
Dec 06 10:21:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:29.466 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:29 np0005548790.localdomain systemd-udevd[318252]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:21:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:29.479 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-0bac4930-1414-40dc-976f-512e822c0f9e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bac4930-1414-40dc-976f-512e822c0f9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1893f5-2554-46ca-bbb6-fd5149c345b0, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=699fdaf2-b63f-467d-898b-17603a68ff5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:29.480 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 699fdaf2-b63f-467d-898b-17603a68ff5c in datapath 0bac4930-1414-40dc-976f-512e822c0f9e bound to our chassis
Dec 06 10:21:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:29.482 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port f8ee3ca5-bdb6-448b-bebc-8dfc77274a4c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:21:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:29.483 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bac4930-1414-40dc-976f-512e822c0f9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:21:29 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:29.483 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b2286d30-9b79-419d-aae8-7f69da11cab3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:29.498 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:29Z|00222|binding|INFO|Setting lport 699fdaf2-b63f-467d-898b-17603a68ff5c ovn-installed in OVS
Dec 06 10:21:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:29Z|00223|binding|INFO|Setting lport 699fdaf2-b63f-467d-898b-17603a68ff5c up in Southbound
Dec 06 10:21:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:29.501 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap699fdaf2-b6: No such device
Dec 06 10:21:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:29.535 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:29.565 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:30 np0005548790.localdomain ceph-mon[301742]: pgmap v383: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 16 KiB/s wr, 61 op/s
Dec 06 10:21:30 np0005548790.localdomain podman[318323]: 
Dec 06 10:21:30 np0005548790.localdomain podman[318323]: 2025-12-06 10:21:30.433478776 +0000 UTC m=+0.074309363 container create afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:21:30 np0005548790.localdomain systemd[1]: Started libpod-conmon-afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63.scope.
Dec 06 10:21:30 np0005548790.localdomain systemd[1]: tmp-crun.hfUsWD.mount: Deactivated successfully.
Dec 06 10:21:30 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:21:30 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87a882ea1dae0fd34bfc7904ccbb503923beaa2c425624cf7c82d8c53d414b96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:30 np0005548790.localdomain podman[318323]: 2025-12-06 10:21:30.405053276 +0000 UTC m=+0.045883883 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:21:30 np0005548790.localdomain podman[318323]: 2025-12-06 10:21:30.516173223 +0000 UTC m=+0.157003830 container init afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:21:30 np0005548790.localdomain podman[318323]: 2025-12-06 10:21:30.52601848 +0000 UTC m=+0.166849087 container start afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:21:30 np0005548790.localdomain dnsmasq[318342]: started, version 2.85 cachesize 150
Dec 06 10:21:30 np0005548790.localdomain dnsmasq[318342]: DNS service limited to local subnets
Dec 06 10:21:30 np0005548790.localdomain dnsmasq[318342]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:21:30 np0005548790.localdomain dnsmasq[318342]: warning: no upstream servers configured
Dec 06 10:21:30 np0005548790.localdomain dnsmasq-dhcp[318342]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:21:30 np0005548790.localdomain dnsmasq[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/addn_hosts - 0 addresses
Dec 06 10:21:30 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/host
Dec 06 10:21:30 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/opts
Dec 06 10:21:30 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:30.587 262327 INFO neutron.agent.dhcp.agent [None req-425265c4-d8a5-4f41-83e7-70b59b90ac02 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:29Z, description=, device_id=e8d95510-b62b-4c0b-b2c2-d92ce953154b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c8565be80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856d0190>], id=fb134b93-1422-45c3-9ffd-7f42984cbf6e, ip_allocation=immediate, mac_address=fa:16:3e:a0:4f:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:26Z, description=, dns_domain=, id=0bac4930-1414-40dc-976f-512e822c0f9e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-351399502, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6187, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2813, status=ACTIVE, subnets=['e2eeaa5c-d203-4543-9d06-25483da9321e'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:27Z, vlan_transparent=None, network_id=0bac4930-1414-40dc-976f-512e822c0f9e, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2842, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:29Z on network 0bac4930-1414-40dc-976f-512e822c0f9e
Dec 06 10:21:30 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:30.645 2 INFO neutron.agent.securitygroups_rpc [None req-650d6f16-dff9-4e9f-a532-00c1d7cd9ac8 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:30 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:30.659 262327 INFO neutron.agent.dhcp.agent [None req-b23a092c-cabf-4af2-aea2-4a0d6c900f3d - - - - - -] DHCP configuration for ports {'f69dcd64-7a84-41c5-884f-ec15dea26060'} is completed
Dec 06 10:21:30 np0005548790.localdomain dnsmasq[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/addn_hosts - 1 addresses
Dec 06 10:21:30 np0005548790.localdomain podman[318360]: 2025-12-06 10:21:30.813037418 +0000 UTC m=+0.041748231 container kill afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:21:30 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/host
Dec 06 10:21:30 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/opts
Dec 06 10:21:30 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:30.952 262327 INFO neutron.agent.dhcp.agent [None req-70a9049b-5300-48e2-b1a2-837f61268198 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:29Z, description=, device_id=e8d95510-b62b-4c0b-b2c2-d92ce953154b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c859cad90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c859cac70>], id=fb134b93-1422-45c3-9ffd-7f42984cbf6e, ip_allocation=immediate, mac_address=fa:16:3e:a0:4f:87, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:26Z, description=, dns_domain=, id=0bac4930-1414-40dc-976f-512e822c0f9e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-351399502, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6187, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2813, status=ACTIVE, subnets=['e2eeaa5c-d203-4543-9d06-25483da9321e'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:27Z, vlan_transparent=None, network_id=0bac4930-1414-40dc-976f-512e822c0f9e, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2842, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:29Z on network 0bac4930-1414-40dc-976f-512e822c0f9e
Dec 06 10:21:30 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:21:30 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3604592233' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:30 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:21:30 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3604592233' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:31 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:31.038 262327 INFO neutron.agent.dhcp.agent [None req-e50b7541-5e84-4780-9fbe-c684ab6e07d1 - - - - - -] DHCP configuration for ports {'fb134b93-1422-45c3-9ffd-7f42984cbf6e'} is completed
Dec 06 10:21:31 np0005548790.localdomain dnsmasq[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/addn_hosts - 1 addresses
Dec 06 10:21:31 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/host
Dec 06 10:21:31 np0005548790.localdomain podman[318398]: 2025-12-06 10:21:31.204579166 +0000 UTC m=+0.054552498 container kill afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:31 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/opts
Dec 06 10:21:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 53 op/s
Dec 06 10:21:31 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3604592233' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:31 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3604592233' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:31 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:31.384 2 INFO neutron.agent.securitygroups_rpc [None req-1b25accf-a6fc-48da-91f7-6e8e14cb52bd a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:31 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:31.439 2 INFO neutron.agent.securitygroups_rpc [None req-253b8277-210a-4573-82a4-3ce2f38be71e cc9a0aebc5df40baa5d30408481c8824 5ea98fc77f0c4728a4c2d7a5429d8129 - - default default] Security group rule updated ['113d3ef2-1b05-41a6-846b-b981d95adda0']
Dec 06 10:21:31 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:31.459 262327 INFO neutron.agent.dhcp.agent [None req-4ca55388-0f4b-44d1-8eba-8583d70c6fee - - - - - -] DHCP configuration for ports {'fb134b93-1422-45c3-9ffd-7f42984cbf6e'} is completed
Dec 06 10:21:31 np0005548790.localdomain dnsmasq[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/addn_hosts - 0 addresses
Dec 06 10:21:31 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/host
Dec 06 10:21:31 np0005548790.localdomain podman[318435]: 2025-12-06 10:21:31.618280994 +0000 UTC m=+0.063697925 container kill afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:31 np0005548790.localdomain dnsmasq-dhcp[318342]: read /var/lib/neutron/dhcp/0bac4930-1414-40dc-976f-512e822c0f9e/opts
Dec 06 10:21:31 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:31Z|00224|binding|INFO|Releasing lport 699fdaf2-b63f-467d-898b-17603a68ff5c from this chassis (sb_readonly=0)
Dec 06 10:21:31 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:31Z|00225|binding|INFO|Setting lport 699fdaf2-b63f-467d-898b-17603a68ff5c down in Southbound
Dec 06 10:21:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:31.860 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:31 np0005548790.localdomain kernel: device tap699fdaf2-b6 left promiscuous mode
Dec 06 10:21:31 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:31.870 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-0bac4930-1414-40dc-976f-512e822c0f9e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0bac4930-1414-40dc-976f-512e822c0f9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1893f5-2554-46ca-bbb6-fd5149c345b0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=699fdaf2-b63f-467d-898b-17603a68ff5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:31 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:31.873 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 699fdaf2-b63f-467d-898b-17603a68ff5c in datapath 0bac4930-1414-40dc-976f-512e822c0f9e unbound from our chassis
Dec 06 10:21:31 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:31.875 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0bac4930-1414-40dc-976f-512e822c0f9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:21:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:31.875 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:31 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:31.876 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ea6326-e233-4800-b40a-e7b4959161c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:31 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:31.888 2 INFO neutron.agent.securitygroups_rpc [None req-0a2b9713-3499-411d-9f50-733d3731fca5 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:32 np0005548790.localdomain ceph-mon[301742]: pgmap v384: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 14 KiB/s wr, 53 op/s
Dec 06 10:21:32 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:32.510 2 INFO neutron.agent.securitygroups_rpc [None req-9523440b-4459-4eba-b7c1-0671f42e7aa4 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "format": "json"}]: dispatch
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:586fac81-a866-4e62-9321-4e3fa9e20434, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:586fac81-a866-4e62-9321-4e3fa9e20434, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:32 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:21:32.677+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '586fac81-a866-4e62-9321-4e3fa9e20434' of type subvolume
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '586fac81-a866-4e62-9321-4e3fa9e20434' of type subvolume
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:586fac81-a866-4e62-9321-4e3fa9e20434, vol_name:cephfs) < ""
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/586fac81-a866-4e62-9321-4e3fa9e20434'' moved to trashcan
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:21:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:586fac81-a866-4e62-9321-4e3fa9e20434, vol_name:cephfs) < ""
Dec 06 10:21:33 np0005548790.localdomain dnsmasq[318342]: exiting on receipt of SIGTERM
Dec 06 10:21:33 np0005548790.localdomain podman[318474]: 2025-12-06 10:21:33.002920371 +0000 UTC m=+0.064030184 container kill afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 10:21:33 np0005548790.localdomain systemd[1]: libpod-afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63.scope: Deactivated successfully.
Dec 06 10:21:33 np0005548790.localdomain podman[318488]: 2025-12-06 10:21:33.082177946 +0000 UTC m=+0.064198569 container died afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:33 np0005548790.localdomain systemd[1]: tmp-crun.3w8dYN.mount: Deactivated successfully.
Dec 06 10:21:33 np0005548790.localdomain podman[318488]: 2025-12-06 10:21:33.121722066 +0000 UTC m=+0.103742669 container cleanup afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:21:33 np0005548790.localdomain systemd[1]: libpod-conmon-afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63.scope: Deactivated successfully.
Dec 06 10:21:33 np0005548790.localdomain podman[318490]: 2025-12-06 10:21:33.169337995 +0000 UTC m=+0.141765367 container remove afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0bac4930-1414-40dc-976f-512e822c0f9e, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:21:33 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:33.202 262327 INFO neutron.agent.dhcp.agent [None req-cbd43211-6e1d-4740-92e6-f40e80a5721f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 15 KiB/s wr, 42 op/s
Dec 06 10:21:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:33.259 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:33.263 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:33 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:33.270 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "format": "json"}]: dispatch
Dec 06 10:21:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "586fac81-a866-4e62-9321-4e3fa9e20434", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:33.525 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-87a882ea1dae0fd34bfc7904ccbb503923beaa2c425624cf7c82d8c53d414b96-merged.mount: Deactivated successfully.
Dec 06 10:21:34 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afb59ef8b990624c9c6d257d24476dc655c840efb7c731ed3a243992cd484b63-userdata-shm.mount: Deactivated successfully.
Dec 06 10:21:34 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d0bac4930\x2d1414\x2d40dc\x2d976f\x2d512e822c0f9e.mount: Deactivated successfully.
Dec 06 10:21:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:34 np0005548790.localdomain ceph-mon[301742]: pgmap v385: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 15 KiB/s wr, 42 op/s
Dec 06 10:21:34 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/620957375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:34 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/620957375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:21:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:21:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2748713948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v386: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 15 KiB/s wr, 42 op/s
Dec 06 10:21:35 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e182 e182: 6 total, 6 up, 6 in
Dec 06 10:21:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:21:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2748713948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, vol_name:cephfs) < ""
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, vol_name:cephfs) < ""
Dec 06 10:21:35 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:35.707 2 INFO neutron.agent.securitygroups_rpc [None req-b9509f86-2a19-49fe-82cf-8c23b9e8fca9 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4689547c-b07c-49f8-ab25-594d5c576c89, vol_name:cephfs) < ""
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4689547c-b07c-49f8-ab25-594d5c576c89/.meta.tmp'
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4689547c-b07c-49f8-ab25-594d5c576c89/.meta.tmp' to config b'/volumes/_nogroup/4689547c-b07c-49f8-ab25-594d5c576c89/.meta'
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4689547c-b07c-49f8-ab25-594d5c576c89, vol_name:cephfs) < ""
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "format": "json"}]: dispatch
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4689547c-b07c-49f8-ab25-594d5c576c89, vol_name:cephfs) < ""
Dec 06 10:21:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4689547c-b07c-49f8-ab25-594d5c576c89, vol_name:cephfs) < ""
Dec 06 10:21:36 np0005548790.localdomain ceph-mon[301742]: pgmap v386: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 15 KiB/s wr, 42 op/s
Dec 06 10:21:36 np0005548790.localdomain ceph-mon[301742]: osdmap e182: 6 total, 6 up, 6 in
Dec 06 10:21:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:21:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e183 e183: 6 total, 6 up, 6 in
Dec 06 10:21:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 06 10:21:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "format": "json"}]: dispatch
Dec 06 10:21:37 np0005548790.localdomain ceph-mon[301742]: osdmap e183: 6 total, 6 up, 6 in
Dec 06 10:21:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2846497516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1607464631' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e184 e184: 6 total, 6 up, 6 in
Dec 06 10:21:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:38.263 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:38.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:38.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:21:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:38.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:21:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:38.356 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:21:38 np0005548790.localdomain ceph-mon[301742]: pgmap v389: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Dec 06 10:21:38 np0005548790.localdomain ceph-mon[301742]: osdmap e184: 6 total, 6 up, 6 in
Dec 06 10:21:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e185 e185: 6 total, 6 up, 6 in
Dec 06 10:21:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 193 MiB data, 931 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 25 KiB/s wr, 71 op/s
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6e6fd32a-770e-4afe-8d83-a1956fc630a7'' moved to trashcan
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6e6fd32a-770e-4afe-8d83-a1956fc630a7, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "format": "json"}]: dispatch
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4689547c-b07c-49f8-ab25-594d5c576c89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4689547c-b07c-49f8-ab25-594d5c576c89, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:21:39.472+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4689547c-b07c-49f8-ab25-594d5c576c89' of type subvolume
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4689547c-b07c-49f8-ab25-594d5c576c89' of type subvolume
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4689547c-b07c-49f8-ab25-594d5c576c89, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4689547c-b07c-49f8-ab25-594d5c576c89'' moved to trashcan
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:21:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4689547c-b07c-49f8-ab25-594d5c576c89, vol_name:cephfs) < ""
Dec 06 10:21:39 np0005548790.localdomain ceph-mon[301742]: osdmap e185: 6 total, 6 up, 6 in
Dec 06 10:21:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/929112309' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/929112309' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:39 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:39.902 2 INFO neutron.agent.securitygroups_rpc [None req-07bddad5-b128-4210-8a04-c1adeb45eb18 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:21:40 np0005548790.localdomain ceph-mon[301742]: pgmap v392: 177 pgs: 177 active+clean; 193 MiB data, 931 MiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 25 KiB/s wr, 71 op/s
Dec 06 10:21:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "format": "json"}]: dispatch
Dec 06 10:21:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6e6fd32a-770e-4afe-8d83-a1956fc630a7", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "format": "json"}]: dispatch
Dec 06 10:21:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4689547c-b07c-49f8-ab25-594d5c576c89", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:40 np0005548790.localdomain podman[318519]: 2025-12-06 10:21:40.576256224 +0000 UTC m=+0.087731825 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e186 e186: 6 total, 6 up, 6 in
Dec 06 10:21:40 np0005548790.localdomain podman[318519]: 2025-12-06 10:21:40.612289299 +0000 UTC m=+0.123764890 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:40 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 193 MiB data, 931 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 22 KiB/s wr, 60 op/s
Dec 06 10:21:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:41.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:41 np0005548790.localdomain ceph-mon[301742]: osdmap e186: 6 total, 6 up, 6 in
Dec 06 10:21:41 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:41.827 262327 INFO neutron.agent.linux.ip_lib [None req-337a784d-f036-40ff-abc1-30f52e197c50 - - - - - -] Device tap091ccf4f-57 cannot be used as it has no MAC address
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:21:41
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['backups', 'images', 'volumes', 'vms', 'manila_metadata', '.mgr', 'manila_data']
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:21:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:41.897 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:41 np0005548790.localdomain kernel: device tap091ccf4f-57 entered promiscuous mode
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:41 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016501.9063] manager: (tap091ccf4f-57): new Generic device (/org/freedesktop/NetworkManager/Devices/46)
Dec 06 10:21:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:41Z|00226|binding|INFO|Claiming lport 091ccf4f-572f-483e-9cda-10c1c3782141 for this chassis.
Dec 06 10:21:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:41.906 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:41Z|00227|binding|INFO|091ccf4f-572f-483e-9cda-10c1c3782141: Claiming unknown
Dec 06 10:21:41 np0005548790.localdomain systemd-udevd[318547]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0618c33cd0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0618c322e0>)]
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:21:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:21:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:41Z|00228|binding|INFO|Setting lport 091ccf4f-572f-483e-9cda-10c1c3782141 ovn-installed in OVS
Dec 06 10:21:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:41.940 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:41.942 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap091ccf4f-57: No such device
Dec 06 10:21:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:41.976 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:41Z|00229|binding|INFO|Setting lport 091ccf4f-572f-483e-9cda-10c1c3782141 up in Southbound
Dec 06 10:21:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:42.008 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:42.108 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-c489aae1-1b92-4ad9-822c-2807b1d77588', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c489aae1-1b92-4ad9-822c-2807b1d77588', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ceb46b-52ec-436f-abbe-b13de2757264, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=091ccf4f-572f-483e-9cda-10c1c3782141) old=Port_Binding(up=[False], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:42.110 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 091ccf4f-572f-483e-9cda-10c1c3782141 in datapath c489aae1-1b92-4ad9-822c-2807b1d77588 bound to our chassis
Dec 06 10:21:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:42.113 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c489aae1-1b92-4ad9-822c-2807b1d77588 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:21:42 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:42.113 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[40b85be0-6fae-47aa-840e-0619cac5e5b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0001134142936906756 of space, bias 4.0, pg target 0.09027777777777778 quantized to 16 (current 16)
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:21:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:42.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e187 e187: 6 total, 6 up, 6 in
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb_a3496f8c-73ba-4d3a-8401-faf81aff8654", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb_a3496f8c-73ba-4d3a-8401-faf81aff8654, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp'
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp' to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta'
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb_a3496f8c-73ba-4d3a-8401-faf81aff8654, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:21:42 np0005548790.localdomain ceph-mon[301742]: pgmap v394: 177 pgs: 177 active+clean; 193 MiB data, 931 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 22 KiB/s wr, 60 op/s
Dec 06 10:21:42 np0005548790.localdomain ceph-mon[301742]: osdmap e187: 6 total, 6 up, 6 in
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp'
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta.tmp' to config b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5/.meta'
Dec 06 10:21:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:802dfa0a-e026-43d4-b3ba-67c6242d0ddb, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:21:42 np0005548790.localdomain podman[318618]: 
Dec 06 10:21:42 np0005548790.localdomain podman[318618]: 2025-12-06 10:21:42.96025477 +0000 UTC m=+0.091179949 container create ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: Started libpod-conmon-ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd.scope.
Dec 06 10:21:43 np0005548790.localdomain podman[318618]: 2025-12-06 10:21:42.916507956 +0000 UTC m=+0.047433155 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:21:43 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1274f9a65089a3391047c2d29ab9631b2a0cbdab77db8c197fc4564864a98ab9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:43 np0005548790.localdomain podman[318618]: 2025-12-06 10:21:43.036160094 +0000 UTC m=+0.167085273 container init ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:21:43 np0005548790.localdomain podman[318618]: 2025-12-06 10:21:43.044951943 +0000 UTC m=+0.175877122 container start ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 10:21:43 np0005548790.localdomain dnsmasq[318636]: started, version 2.85 cachesize 150
Dec 06 10:21:43 np0005548790.localdomain dnsmasq[318636]: DNS service limited to local subnets
Dec 06 10:21:43 np0005548790.localdomain dnsmasq[318636]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:21:43 np0005548790.localdomain dnsmasq[318636]: warning: no upstream servers configured
Dec 06 10:21:43 np0005548790.localdomain dnsmasq-dhcp[318636]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:21:43 np0005548790.localdomain dnsmasq[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/addn_hosts - 0 addresses
Dec 06 10:21:43 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/host
Dec 06 10:21:43 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/opts
Dec 06 10:21:43 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:43.178 262327 INFO neutron.agent.dhcp.agent [None req-99f110be-dd11-4c5d-b51c-05a26022e55c - - - - - -] DHCP configuration for ports {'4da850c4-747e-4805-a36c-2111d194a6bc'} is completed
Dec 06 10:21:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 193 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 52 KiB/s wr, 140 op/s
Dec 06 10:21:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:43.266 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:43.331 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:21:43 np0005548790.localdomain podman[318637]: 2025-12-06 10:21:43.569530831 +0000 UTC m=+0.084462677 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:21:43 np0005548790.localdomain podman[318637]: 2025-12-06 10:21:43.582136752 +0000 UTC m=+0.097068638 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:21:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb_a3496f8c-73ba-4d3a-8401-faf81aff8654", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "snap_name": "802dfa0a-e026-43d4-b3ba-67c6242d0ddb", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/518841331' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/518841331' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:21:43 np0005548790.localdomain podman[318661]: 2025-12-06 10:21:43.818278874 +0000 UTC m=+0.081791706 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:21:43 np0005548790.localdomain podman[318661]: 2025-12-06 10:21:43.858126612 +0000 UTC m=+0.121639414 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:21:43 np0005548790.localdomain podman[318662]: 2025-12-06 10:21:43.879213293 +0000 UTC m=+0.139860327 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Dec 06 10:21:43 np0005548790.localdomain podman[318662]: 2025-12-06 10:21:43.893438688 +0000 UTC m=+0.154085722 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, distribution-scope=public)
Dec 06 10:21:43 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:21:43 np0005548790.localdomain ceph-mgr[286934]: [devicehealth INFO root] Check health
Dec 06 10:21:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:44 np0005548790.localdomain sshd[318699]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:21:44 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:44.321 2 INFO neutron.agent.securitygroups_rpc [None req-e0373cd9-ce28-4174-b7a6-2d0d511546d5 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['ecf618e7-df48-4fb6-89e3-d9952de70569']
Dec 06 10:21:44 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:44.490 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:43Z, description=, device_id=9f053bd8-0d00-46da-b19e-b225e7c8d058, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85803df0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c855e8a60>], id=97116a0e-963c-4e10-8861-a52bc14ea74f, ip_allocation=immediate, mac_address=fa:16:3e:de:c2:90, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:39Z, description=, dns_domain=, id=c489aae1-1b92-4ad9-822c-2807b1d77588, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-486981734, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38001, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2913, status=ACTIVE, subnets=['fee26856-d888-4094-8347-fe29352361c5'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:40Z, vlan_transparent=None, network_id=c489aae1-1b92-4ad9-822c-2807b1d77588, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2949, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:44Z on network c489aae1-1b92-4ad9-822c-2807b1d77588
Dec 06 10:21:44 np0005548790.localdomain dnsmasq[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/addn_hosts - 1 addresses
Dec 06 10:21:44 np0005548790.localdomain podman[318718]: 2025-12-06 10:21:44.71889705 +0000 UTC m=+0.064361253 container kill ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:44 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/host
Dec 06 10:21:44 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/opts
Dec 06 10:21:44 np0005548790.localdomain ceph-mon[301742]: pgmap v396: 177 pgs: 177 active+clean; 193 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 52 KiB/s wr, 140 op/s
Dec 06 10:21:44 np0005548790.localdomain ceph-mon[301742]: mgrmap e50: np0005548790.kvkfyr(active, since 10m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:21:45 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:45.074 262327 INFO neutron.agent.dhcp.agent [None req-797efc5e-3e05-46ac-b17b-bf748f709854 - - - - - -] DHCP configuration for ports {'97116a0e-963c-4e10-8861-a52bc14ea74f'} is completed
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 193 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 29 KiB/s wr, 77 op/s
Dec 06 10:21:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:45.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:45.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "format": "json"}]: dispatch
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:45 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:21:45.904+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ddc97c2-406f-4edf-8fc4-61a5ba2286e5' of type subvolume
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1ddc97c2-406f-4edf-8fc4-61a5ba2286e5' of type subvolume
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:21:46 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:46.310 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:43Z, description=, device_id=9f053bd8-0d00-46da-b19e-b225e7c8d058, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85602dc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856022e0>], id=97116a0e-963c-4e10-8861-a52bc14ea74f, ip_allocation=immediate, mac_address=fa:16:3e:de:c2:90, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:39Z, description=, dns_domain=, id=c489aae1-1b92-4ad9-822c-2807b1d77588, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-486981734, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=38001, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2913, status=ACTIVE, subnets=['fee26856-d888-4094-8347-fe29352361c5'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:40Z, vlan_transparent=None, network_id=c489aae1-1b92-4ad9-822c-2807b1d77588, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2949, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:44Z on network c489aae1-1b92-4ad9-822c-2807b1d77588
Dec 06 10:21:46 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1ddc97c2-406f-4edf-8fc4-61a5ba2286e5'' moved to trashcan
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:46 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:21:46 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1ddc97c2-406f-4edf-8fc4-61a5ba2286e5, vol_name:cephfs) < ""
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.359 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.360 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.360 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.360 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.435 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:46 np0005548790.localdomain dnsmasq[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/addn_hosts - 1 addresses
Dec 06 10:21:46 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/host
Dec 06 10:21:46 np0005548790.localdomain podman[318759]: 2025-12-06 10:21:46.558240625 +0000 UTC m=+0.064513717 container kill ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 06 10:21:46 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/opts
Dec 06 10:21:46 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:46.599 2 INFO neutron.agent.securitygroups_rpc [None req-f9e919cd-0c1a-4346-a33d-5b1944f06d7b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['d5c7cc25-d8ea-40c9-b1b2-04cf074e64bb', 'ecf618e7-df48-4fb6-89e3-d9952de70569']
Dec 06 10:21:46 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:21:46 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3433747594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:46.817 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:21:46 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:46.842 262327 INFO neutron.agent.dhcp.agent [None req-1fbe754a-0dd6-4d94-9108-5dc58ded72e5 - - - - - -] DHCP configuration for ports {'97116a0e-963c-4e10-8861-a52bc14ea74f'} is completed
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.046 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.048 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11556MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.048 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.049 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.138 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.138 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.188 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 193 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 24 KiB/s wr, 64 op/s
Dec 06 10:21:47 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:47.252 2 INFO neutron.agent.securitygroups_rpc [None req-cf0a2394-56b4-45a3-97ca-aedf09b47c4d a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['d5c7cc25-d8ea-40c9-b1b2-04cf074e64bb']
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: pgmap v397: 177 pgs: 177 active+clean; 193 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 29 KiB/s wr, 77 op/s
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "format": "json"}]: dispatch
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1ddc97c2-406f-4edf-8fc4-61a5ba2286e5", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3433747594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e188 e188: 6 total, 6 up, 6 in
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta.tmp'
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta.tmp' to config b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta'
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "format": "json"}]: dispatch
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:21:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:21:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3407317111' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.665 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.672 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.884 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.887 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:21:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:47.888 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:21:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:48.301 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: pgmap v398: 177 pgs: 177 active+clean; 193 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 24 KiB/s wr, 64 op/s
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: osdmap e188: 6 total, 6 up, 6 in
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4095004814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "format": "json"}]: dispatch
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3407317111' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e189 e189: 6 total, 6 up, 6 in
Dec 06 10:21:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:21:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:21:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:48.403 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:21:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:48.404 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:21:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:48.404 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:21:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:21:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:21:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:21:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19212 "" "Go-http-client/1.1"
Dec 06 10:21:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 194 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 47 KiB/s wr, 78 op/s
Dec 06 10:21:49 np0005548790.localdomain dnsmasq[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/addn_hosts - 0 addresses
Dec 06 10:21:49 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/host
Dec 06 10:21:49 np0005548790.localdomain dnsmasq-dhcp[318636]: read /var/lib/neutron/dhcp/c489aae1-1b92-4ad9-822c-2807b1d77588/opts
Dec 06 10:21:49 np0005548790.localdomain podman[318836]: 2025-12-06 10:21:49.31793783 +0000 UTC m=+0.059191603 container kill ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:21:49 np0005548790.localdomain systemd[1]: tmp-crun.ZTUMLF.mount: Deactivated successfully.
Dec 06 10:21:49 np0005548790.localdomain ceph-mon[301742]: osdmap e189: 6 total, 6 up, 6 in
Dec 06 10:21:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2519745187' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:21:49 np0005548790.localdomain sshd[318699]: Connection closed by 101.47.160.186 port 37016 [preauth]
Dec 06 10:21:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:49.481 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:49 np0005548790.localdomain kernel: device tap091ccf4f-57 left promiscuous mode
Dec 06 10:21:49 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:49Z|00230|binding|INFO|Releasing lport 091ccf4f-572f-483e-9cda-10c1c3782141 from this chassis (sb_readonly=0)
Dec 06 10:21:49 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:49Z|00231|binding|INFO|Setting lport 091ccf4f-572f-483e-9cda-10c1c3782141 down in Southbound
Dec 06 10:21:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:49.497 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-c489aae1-1b92-4ad9-822c-2807b1d77588', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c489aae1-1b92-4ad9-822c-2807b1d77588', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70ceb46b-52ec-436f-abbe-b13de2757264, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=091ccf4f-572f-483e-9cda-10c1c3782141) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:49.499 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 091ccf4f-572f-483e-9cda-10c1c3782141 in datapath c489aae1-1b92-4ad9-822c-2807b1d77588 unbound from our chassis
Dec 06 10:21:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:49.501 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c489aae1-1b92-4ad9-822c-2807b1d77588, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:21:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:49.502 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:49.502 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[2924ef31-da57-4673-aad1-4f8ea377a78b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:49.889 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:21:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:49.890 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:21:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:49.966 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:50 np0005548790.localdomain ceph-mon[301742]: pgmap v401: 177 pgs: 177 active+clean; 194 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 47 KiB/s wr, 78 op/s
Dec 06 10:21:50 np0005548790.localdomain dnsmasq[318636]: exiting on receipt of SIGTERM
Dec 06 10:21:50 np0005548790.localdomain podman[318875]: 2025-12-06 10:21:50.465951423 +0000 UTC m=+0.057209900 container kill ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:21:50 np0005548790.localdomain systemd[1]: libpod-ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd.scope: Deactivated successfully.
Dec 06 10:21:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:21:50 np0005548790.localdomain podman[318889]: 2025-12-06 10:21:50.537110108 +0000 UTC m=+0.054352311 container died ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 06 10:21:50 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd-userdata-shm.mount: Deactivated successfully.
Dec 06 10:21:50 np0005548790.localdomain podman[318889]: 2025-12-06 10:21:50.563627506 +0000 UTC m=+0.080869649 container cleanup ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:21:50 np0005548790.localdomain systemd[1]: libpod-conmon-ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd.scope: Deactivated successfully.
Dec 06 10:21:50 np0005548790.localdomain podman[318890]: 2025-12-06 10:21:50.617264948 +0000 UTC m=+0.128898870 container remove ca3a3fb106b4b31103c8b279412ebd96adeb5fcec1366d20f5d94fbd0cb727bd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c489aae1-1b92-4ad9-822c-2807b1d77588, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 10:21:50 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:50.649 262327 INFO neutron.agent.dhcp.agent [None req-0df6b3e0-9e7e-4bf8-b8ea-4bc6fa7ddb01 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:50 np0005548790.localdomain podman[318891]: 2025-12-06 10:21:50.675402041 +0000 UTC m=+0.183464586 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 06 10:21:50 np0005548790.localdomain podman[318891]: 2025-12-06 10:21:50.688428674 +0000 UTC m=+0.196491219 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:21:50 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:21:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "snap_name": "34ca82e6-47fe-4edf-98f0-b49a22c3b971", "format": "json"}]: dispatch
Dec 06 10:21:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:34ca82e6-47fe-4edf-98f0-b49a22c3b971, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:21:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:34ca82e6-47fe-4edf-98f0-b49a22c3b971, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:21:51 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:51.069 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:21:51 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:51.079 2 INFO neutron.agent.securitygroups_rpc [None req-9aa7129f-1e5c-4b11-befb-2e4631e91223 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['2129cf94-a39f-4e4e-ab36-0d488acfdae6']
Dec 06 10:21:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 194 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 17 KiB/s wr, 3 op/s
Dec 06 10:21:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:51.313 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:51 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-1274f9a65089a3391047c2d29ab9631b2a0cbdab77db8c197fc4564864a98ab9-merged.mount: Deactivated successfully.
Dec 06 10:21:51 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2dc489aae1\x2d1b92\x2d4ad9\x2d822c\x2d2807b1d77588.mount: Deactivated successfully.
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta.tmp'
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta.tmp' to config b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta'
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "format": "json"}]: dispatch
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e190 e190: 6 total, 6 up, 6 in
Dec 06 10:21:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "snap_name": "34ca82e6-47fe-4edf-98f0-b49a22c3b971", "format": "json"}]: dispatch
Dec 06 10:21:52 np0005548790.localdomain ceph-mon[301742]: pgmap v402: 177 pgs: 177 active+clean; 194 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 17 KiB/s wr, 3 op/s
Dec 06 10:21:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:52 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:52.769 2 INFO neutron.agent.securitygroups_rpc [None req-1911ca26-173a-428d-a937-0c8a9ec17a18 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['2129cf94-a39f-4e4e-ab36-0d488acfdae6', '398ea86c-4672-439c-a6e6-0b07306b07fb', 'fb8e4f1d-f459-47af-ae16-0205f1ef2540']
Dec 06 10:21:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 194 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 37 KiB/s wr, 53 op/s
Dec 06 10:21:53 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:53.251 2 INFO neutron.agent.securitygroups_rpc [None req-e01a41a9-2c25-4070-a80a-0e758300cea8 a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['398ea86c-4672-439c-a6e6-0b07306b07fb', 'fb8e4f1d-f459-47af-ae16-0205f1ef2540']
Dec 06 10:21:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:53.303 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "format": "json"}]: dispatch
Dec 06 10:21:53 np0005548790.localdomain ceph-mon[301742]: osdmap e190: 6 total, 6 up, 6 in
Dec 06 10:21:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:21:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:21:53 np0005548790.localdomain podman[318934]: 2025-12-06 10:21:53.562350481 +0000 UTC m=+0.074590741 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:21:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:21:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:21:53 np0005548790.localdomain systemd[1]: tmp-crun.Kzpv8y.mount: Deactivated successfully.
Dec 06 10:21:53 np0005548790.localdomain podman[318935]: 2025-12-06 10:21:53.629960281 +0000 UTC m=+0.138640874 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:21:53 np0005548790.localdomain podman[318934]: 2025-12-06 10:21:53.650322661 +0000 UTC m=+0.162562881 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:21:53 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:21:53 np0005548790.localdomain podman[318935]: 2025-12-06 10:21:53.667229429 +0000 UTC m=+0.175910012 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:53 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:21:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:54 np0005548790.localdomain ceph-mon[301742]: pgmap v404: 177 pgs: 177 active+clean; 194 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 37 KiB/s wr, 53 op/s
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, vol_name:cephfs) < ""
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/67d6cee7-d14d-4a25-bb5e-de16ccdb07ed/.meta.tmp'
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/67d6cee7-d14d-4a25-bb5e-de16ccdb07ed/.meta.tmp' to config b'/volumes/_nogroup/67d6cee7-d14d-4a25-bb5e-de16ccdb07ed/.meta'
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, vol_name:cephfs) < ""
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "format": "json"}]: dispatch
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, vol_name:cephfs) < ""
Dec 06 10:21:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, vol_name:cephfs) < ""
Dec 06 10:21:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 194 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 40 op/s
Dec 06 10:21:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "snap_name": "80ddbc86-47b7-40f0-8588-929d2cc3feb6", "format": "json"}]: dispatch
Dec 06 10:21:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:80ddbc86-47b7-40f0-8588-929d2cc3feb6, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:80ddbc86-47b7-40f0-8588-929d2cc3feb6, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:55 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:21:55.378 2 INFO neutron.agent.securitygroups_rpc [None req-41969300-114c-4f41-96fa-3fc73bdf2a0b a2f6f80b9d5a42ccb727340d59efb967 a18f82f0d09644c7b6d23e2fece8be4f - - default default] Security group member updated ['f89459b7-5955-49a9-980d-ccf671c641e2']
Dec 06 10:21:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1393965120' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1393965120' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:55 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:55 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:21:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "format": "json"}]: dispatch
Dec 06 10:21:56 np0005548790.localdomain ceph-mon[301742]: pgmap v405: 177 pgs: 177 active+clean; 194 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 28 KiB/s wr, 40 op/s
Dec 06 10:21:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "snap_name": "80ddbc86-47b7-40f0-8588-929d2cc3feb6", "format": "json"}]: dispatch
Dec 06 10:21:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/880620488' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:21:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/880620488' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:21:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 194 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 10 KiB/s wr, 33 op/s
Dec 06 10:21:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e191 e191: 6 total, 6 up, 6 in
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.308 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.310 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.310 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.310 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.345 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.346 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "format": "json"}]: dispatch
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:21:58.365+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67d6cee7-d14d-4a25-bb5e-de16ccdb07ed' of type subvolume
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '67d6cee7-d14d-4a25-bb5e-de16ccdb07ed' of type subvolume
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain ceph-mon[301742]: pgmap v406: 177 pgs: 177 active+clean; 194 MiB data, 937 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 10 KiB/s wr, 33 op/s
Dec 06 10:21:58 np0005548790.localdomain ceph-mon[301742]: osdmap e191: 6 total, 6 up, 6 in
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/67d6cee7-d14d-4a25-bb5e-de16ccdb07ed'' moved to trashcan
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:67d6cee7-d14d-4a25-bb5e-de16ccdb07ed, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:58.433 262327 INFO neutron.agent.linux.ip_lib [None req-bf5e380b-d9e4-48ca-8dfb-6ae300521fe1 - - - - - -] Device tap727d3eb4-92 cannot be used as it has no MAC address
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.455 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:58 np0005548790.localdomain kernel: device tap727d3eb4-92 entered promiscuous mode
Dec 06 10:21:58 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016518.4670] manager: (tap727d3eb4-92): new Generic device (/org/freedesktop/NetworkManager/Devices/47)
Dec 06 10:21:58 np0005548790.localdomain systemd-udevd[318992]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.471 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:58 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:58Z|00232|binding|INFO|Claiming lport 727d3eb4-92bc-4821-b7ef-8068b10cc075 for this chassis.
Dec 06 10:21:58 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:58Z|00233|binding|INFO|727d3eb4-92bc-4821-b7ef-8068b10cc075: Claiming unknown
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:58Z|00234|binding|INFO|Setting lport 727d3eb4-92bc-4821-b7ef-8068b10cc075 ovn-installed in OVS
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.511 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tap727d3eb4-92: No such device
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.546 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:58.577 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:58 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:21:58Z|00235|binding|INFO|Setting lport 727d3eb4-92bc-4821-b7ef-8068b10cc075 up in Southbound
Dec 06 10:21:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:58.610 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74acc2e1-8d76-43a2-a761-033f794445a3, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=727d3eb4-92bc-4821-b7ef-8068b10cc075) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:21:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:58.612 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 727d3eb4-92bc-4821-b7ef-8068b10cc075 in datapath d87c79bc-37e1-4b28-b5d3-b5f930e4ffad bound to our chassis
Dec 06 10:21:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:58.615 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Port 97082834-063d-4510-87b3-2ebbbf3ad1e3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 06 10:21:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:58.615 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:21:58 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:21:58.616 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[21963fba-4264-4848-9b44-da324e2b61c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "snap_name": "80ddbc86-47b7-40f0-8588-929d2cc3feb6_17d9a7bb-1ff4-4445-bd23-f0d930d9f87b", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:80ddbc86-47b7-40f0-8588-929d2cc3feb6_17d9a7bb-1ff4-4445-bd23-f0d930d9f87b, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta.tmp'
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta.tmp' to config b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta'
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:80ddbc86-47b7-40f0-8588-929d2cc3feb6_17d9a7bb-1ff4-4445-bd23-f0d930d9f87b, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "snap_name": "80ddbc86-47b7-40f0-8588-929d2cc3feb6", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:80ddbc86-47b7-40f0-8588-929d2cc3feb6, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta.tmp'
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta.tmp' to config b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf/.meta'
Dec 06 10:21:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:80ddbc86-47b7-40f0-8588-929d2cc3feb6, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:21:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 11 MiB/s wr, 105 op/s
Dec 06 10:21:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "format": "json"}]: dispatch
Dec 06 10:21:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "67d6cee7-d14d-4a25-bb5e-de16ccdb07ed", "force": true, "format": "json"}]: dispatch
Dec 06 10:21:59 np0005548790.localdomain podman[319064]: 2025-12-06 10:21:59.454197492 +0000 UTC m=+0.083498071 container create 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:59 np0005548790.localdomain systemd[1]: Started libpod-conmon-8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1.scope.
Dec 06 10:21:59 np0005548790.localdomain podman[319064]: 2025-12-06 10:21:59.413345267 +0000 UTC m=+0.042645856 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:21:59 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:21:59 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaaab77e0559799f1c31030594b7ff933f206d40e78c1f9cc566f48d9a6455b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:21:59 np0005548790.localdomain podman[319064]: 2025-12-06 10:21:59.538997947 +0000 UTC m=+0.168298526 container init 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:21:59 np0005548790.localdomain dnsmasq[319083]: started, version 2.85 cachesize 150
Dec 06 10:21:59 np0005548790.localdomain dnsmasq[319083]: DNS service limited to local subnets
Dec 06 10:21:59 np0005548790.localdomain dnsmasq[319083]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:21:59 np0005548790.localdomain dnsmasq[319083]: warning: no upstream servers configured
Dec 06 10:21:59 np0005548790.localdomain dnsmasq-dhcp[319083]: DHCP, static leases only on 10.101.0.0, lease time 1d
Dec 06 10:21:59 np0005548790.localdomain dnsmasq[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/addn_hosts - 0 addresses
Dec 06 10:21:59 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/host
Dec 06 10:21:59 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/opts
Dec 06 10:21:59 np0005548790.localdomain podman[319064]: 2025-12-06 10:21:59.570294015 +0000 UTC m=+0.199594614 container start 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:21:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:59.627 262327 INFO neutron.agent.dhcp.agent [None req-64028f6f-27fa-469c-853a-2c4f69d52e3f - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:57Z, description=, device_id=2b50982b-9c62-4665-9d93-c7bc12ad6e52, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85606dc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85606f10>], id=eb38612e-334c-45bc-b600-439f9a97eb9d, ip_allocation=immediate, mac_address=fa:16:3e:2e:49:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:55Z, description=, dns_domain=, id=d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-620311885, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54243, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3015, status=ACTIVE, subnets=['3b2e467a-a20c-4cd3-be0f-48eca9a948a4'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:56Z, vlan_transparent=None, network_id=d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3035, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:57Z on network d87c79bc-37e1-4b28-b5d3-b5f930e4ffad
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, vol_name:cephfs) < ""
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0512b167-8d53-4c3b-a204-1d35afbfc3ad/.meta.tmp'
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0512b167-8d53-4c3b-a204-1d35afbfc3ad/.meta.tmp' to config b'/volumes/_nogroup/0512b167-8d53-4c3b-a204-1d35afbfc3ad/.meta'
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, vol_name:cephfs) < ""
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "format": "json"}]: dispatch
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, vol_name:cephfs) < ""
Dec 06 10:21:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, vol_name:cephfs) < ""
Dec 06 10:21:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:21:59.719 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:21:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:59.780 262327 INFO neutron.agent.dhcp.agent [None req-b7c75f1e-d950-4dad-8dbd-97fbc98a285c - - - - - -] DHCP configuration for ports {'26ee22de-092a-4a4c-85cb-3ee93271611a'} is completed
Dec 06 10:21:59 np0005548790.localdomain dnsmasq[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/addn_hosts - 1 addresses
Dec 06 10:21:59 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/host
Dec 06 10:21:59 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/opts
Dec 06 10:21:59 np0005548790.localdomain podman[319099]: 2025-12-06 10:21:59.821158004 +0000 UTC m=+0.061721961 container kill 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:21:59 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:21:59.962 262327 INFO neutron.agent.dhcp.agent [None req-5eef0414-2559-4166-927e-636edf70d127 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:21:57Z, description=, device_id=2b50982b-9c62-4665-9d93-c7bc12ad6e52, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856dba60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c856dbd30>], id=eb38612e-334c-45bc-b600-439f9a97eb9d, ip_allocation=immediate, mac_address=fa:16:3e:2e:49:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:21:55Z, description=, dns_domain=, id=d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-620311885, port_security_enabled=True, project_id=e82deaff368b4feea9fec0f06459a6ca, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54243, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3015, status=ACTIVE, subnets=['3b2e467a-a20c-4cd3-be0f-48eca9a948a4'], tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:56Z, vlan_transparent=None, network_id=d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, port_security_enabled=False, project_id=e82deaff368b4feea9fec0f06459a6ca, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3035, status=DOWN, tags=[], tenant_id=e82deaff368b4feea9fec0f06459a6ca, updated_at=2025-12-06T10:21:57Z on network d87c79bc-37e1-4b28-b5d3-b5f930e4ffad
Dec 06 10:22:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:00.034 262327 INFO neutron.agent.dhcp.agent [None req-8cad5b37-81bf-437d-8036-4af9bb51c8a1 - - - - - -] DHCP configuration for ports {'eb38612e-334c-45bc-b600-439f9a97eb9d'} is completed
Dec 06 10:22:00 np0005548790.localdomain dnsmasq[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/addn_hosts - 1 addresses
Dec 06 10:22:00 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/host
Dec 06 10:22:00 np0005548790.localdomain podman[319138]: 2025-12-06 10:22:00.171041305 +0000 UTC m=+0.047506527 container kill 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:22:00 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/opts
Dec 06 10:22:00 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:00.612 262327 INFO neutron.agent.dhcp.agent [None req-cace5fd3-7b2c-4658-93b0-08869c7381cc - - - - - -] DHCP configuration for ports {'eb38612e-334c-45bc-b600-439f9a97eb9d'} is completed
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "snap_name": "80ddbc86-47b7-40f0-8588-929d2cc3feb6_17d9a7bb-1ff4-4445-bd23-f0d930d9f87b", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "snap_name": "80ddbc86-47b7-40f0-8588-929d2cc3feb6", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: pgmap v408: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 11 MiB/s wr, 105 op/s
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "format": "json"}]: dispatch
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e192 e192: 6 total, 6 up, 6 in
Dec 06 10:22:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 11 MiB/s wr, 68 op/s
Dec 06 10:22:01 np0005548790.localdomain sudo[319159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:22:01 np0005548790.localdomain sudo[319159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:22:01 np0005548790.localdomain sudo[319159]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:01 np0005548790.localdomain sudo[319177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:22:01 np0005548790.localdomain sudo[319177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:22:01 np0005548790.localdomain ceph-mon[301742]: osdmap e192: 6 total, 6 up, 6 in
Dec 06 10:22:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e/.meta.tmp'
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e/.meta.tmp' to config b'/volumes/_nogroup/1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e/.meta'
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "format": "json"}]: dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "format": "json"}]: dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58c822ee-e1d1-46ef-8c6f-1d173f1e10cf' of type subvolume
Dec 06 10:22:02 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:02.304+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '58c822ee-e1d1-46ef-8c6f-1d173f1e10cf' of type subvolume
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/58c822ee-e1d1-46ef-8c6f-1d173f1e10cf'' moved to trashcan
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:58c822ee-e1d1-46ef-8c6f-1d173f1e10cf, vol_name:cephfs) < ""
Dec 06 10:22:02 np0005548790.localdomain podman[319267]: 2025-12-06 10:22:02.369316495 +0000 UTC m=+0.102737812 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.381280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522381340, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 261, "total_data_size": 4407873, "memory_usage": 4468152, "flush_reason": "Manual Compaction"}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522414252, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2373234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23716, "largest_seqno": 26117, "table_properties": {"data_size": 2365392, "index_size": 4347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 21482, "raw_average_key_size": 22, "raw_value_size": 2347822, "raw_average_value_size": 2443, "num_data_blocks": 187, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016395, "oldest_key_time": 1765016395, "file_creation_time": 1765016522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 33033 microseconds, and 6060 cpu microseconds.
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.414317) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2373234 bytes OK
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.414342) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.416207) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.416234) EVENT_LOG_v1 {"time_micros": 1765016522416226, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.416257) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4396686, prev total WAL file size 4397010, number of live WAL files 2.
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.417444) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303034' seq:72057594037927935, type:22 .. '6D6772737461740034323536' seq:0, type:0; will stop at (end)
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2317KB)], [36(17MB)]
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522417513, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 21195768, "oldest_snapshot_seqno": -1}
Dec 06 10:22:02 np0005548790.localdomain podman[319267]: 2025-12-06 10:22:02.501817021 +0000 UTC m=+0.235238278 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, RELEASE=main, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 13319 keys, 19558227 bytes, temperature: kUnknown
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522540621, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 19558227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19482968, "index_size": 40854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33349, "raw_key_size": 355689, "raw_average_key_size": 26, "raw_value_size": 19257420, "raw_average_value_size": 1445, "num_data_blocks": 1544, "num_entries": 13319, "num_filter_entries": 13319, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.540944) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 19558227 bytes
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.543982) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.1 rd, 158.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 18.0 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(17.2) write-amplify(8.2) OK, records in: 13797, records dropped: 478 output_compression: NoCompression
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.544021) EVENT_LOG_v1 {"time_micros": 1765016522544003, "job": 20, "event": "compaction_finished", "compaction_time_micros": 123189, "compaction_time_cpu_micros": 32272, "output_level": 6, "num_output_files": 1, "total_output_size": 19558227, "num_input_records": 13797, "num_output_records": 13319, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522544566, "job": 20, "event": "table_file_deletion", "file_number": 38}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016522547438, "job": 20, "event": "table_file_deletion", "file_number": 36}
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.417327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.547496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.547503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.547506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.547509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:02.547512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: pgmap v410: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 11 MiB/s wr, 68 op/s
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "format": "json"}]: dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "format": "json"}]: dispatch
Dec 06 10:22:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58c822ee-e1d1-46ef-8c6f-1d173f1e10cf", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:03 np0005548790.localdomain sudo[319177]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:22:03 np0005548790.localdomain sudo[319387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:22:03 np0005548790.localdomain sudo[319387]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:22:03 np0005548790.localdomain sudo[319387]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v411: 177 pgs: 177 active+clean; 394 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 59 KiB/s rd, 25 MiB/s wr, 92 op/s
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:22:03 np0005548790.localdomain sudo[319405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:22:03 np0005548790.localdomain sudo[319405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:22:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:03.350 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:03 np0005548790.localdomain sudo[319405]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:22:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "format": "json"}]: dispatch
Dec 06 10:22:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:04 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:04.005+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0512b167-8d53-4c3b-a204-1d35afbfc3ad' of type subvolume
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0512b167-8d53-4c3b-a204-1d35afbfc3ad' of type subvolume
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, vol_name:cephfs) < ""
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0512b167-8d53-4c3b-a204-1d35afbfc3ad'' moved to trashcan
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0512b167-8d53-4c3b-a204-1d35afbfc3ad, vol_name:cephfs) < ""
Dec 06 10:22:04 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:04.037 262327 INFO neutron.agent.linux.ip_lib [None req-29782c15-b0d0-4bf6-b7a9-e37182d6f637 - - - - - -] Device tap15da9b9b-1f cannot be used as it has no MAC address
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.056 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain kernel: device tap15da9b9b-1f entered promiscuous mode
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.064 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016524.0675] manager: (tap15da9b9b-1f): new Generic device (/org/freedesktop/NetworkManager/Devices/48)
Dec 06 10:22:04 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:04Z|00236|binding|INFO|Claiming lport 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 for this chassis.
Dec 06 10:22:04 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:04Z|00237|binding|INFO|15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6: Claiming unknown
Dec 06 10:22:04 np0005548790.localdomain systemd-udevd[319464]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:04 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:04.082 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-b7407861-c6cf-4598-89e1-e36e48483082', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7407861-c6cf-4598-89e1-e36e48483082', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4daafaf0e264da6a728bdd60c5d6377', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d7e2708-917f-4980-97f9-eb0ae9a1323f, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:04 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:04.084 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 in datapath b7407861-c6cf-4598-89e1-e36e48483082 bound to our chassis
Dec 06 10:22:04 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:04.086 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b7407861-c6cf-4598-89e1-e36e48483082 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:22:04 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:04.087 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[460ed84f-12ee-475d-9289-c76cf5490da2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.091 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:04Z|00238|binding|INFO|Setting lport 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 ovn-installed in OVS
Dec 06 10:22:04 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:04Z|00239|binding|INFO|Setting lport 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 up in Southbound
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.097 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.099 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.123 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: pgmap v411: 177 pgs: 177 active+clean; 394 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 59 KiB/s rd, 25 MiB/s wr, 92 op/s
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.149 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 97122bad-0d39-4c54-89e9-b821a806fce0 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 97122bad-0d39-4c54-89e9-b821a806fce0 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:22:04 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 97122bad-0d39-4c54-89e9-b821a806fce0 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:22:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:22:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:04.374 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:04 np0005548790.localdomain sudo[319487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:22:04 np0005548790.localdomain sudo[319487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:22:04 np0005548790.localdomain sudo[319487]: pam_unix(sudo:session): session closed for user root
Dec 06 10:22:04 np0005548790.localdomain podman[319535]: 
Dec 06 10:22:04 np0005548790.localdomain podman[319535]: 2025-12-06 10:22:04.969738819 +0000 UTC m=+0.080178032 container create 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 06 10:22:05 np0005548790.localdomain systemd[1]: Started libpod-conmon-7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17.scope.
Dec 06 10:22:05 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:22:05 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/518cca98c465df79cc2d3f96a17fa6b4edb79d852644e8a50566fa5d5ac420d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:05 np0005548790.localdomain podman[319535]: 2025-12-06 10:22:04.935656926 +0000 UTC m=+0.046096189 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:22:05 np0005548790.localdomain podman[319535]: 2025-12-06 10:22:05.037953205 +0000 UTC m=+0.148392448 container init 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:22:05 np0005548790.localdomain podman[319535]: 2025-12-06 10:22:05.045386226 +0000 UTC m=+0.155825469 container start 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: started, version 2.85 cachesize 150
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: DNS service limited to local subnets
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: warning: no upstream servers configured
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/addn_hosts - 0 addresses
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/host
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/opts
Dec 06 10:22:05 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:05.090 262327 INFO neutron.agent.dhcp.agent [None req-29782c15-b0d0-4bf6-b7a9-e37182d6f637 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:03Z, description=, device_id=d826eb4e-7fd5-4cdb-9ce2-8323b958c600, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85620d30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85620e20>], id=15c435a6-24c2-449b-80f1-b5c887430dd6, ip_allocation=immediate, mac_address=fa:16:3e:28:2e:db, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:22:01Z, description=, dns_domain=, id=b7407861-c6cf-4598-89e1-e36e48483082, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1817397334, port_security_enabled=True, project_id=b4daafaf0e264da6a728bdd60c5d6377, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44157, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3062, status=ACTIVE, subnets=['5e474b64-667a-42d9-8669-d3682204d710'], tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:02Z, vlan_transparent=None, network_id=b7407861-c6cf-4598-89e1-e36e48483082, port_security_enabled=False, project_id=b4daafaf0e264da6a728bdd60c5d6377, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3080, status=DOWN, tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:03Z on network b7407861-c6cf-4598-89e1-e36e48483082
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "format": "json"}]: dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0512b167-8d53-4c3b-a204-1d35afbfc3ad", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:22:05 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:05.211 262327 INFO neutron.agent.dhcp.agent [None req-550f24a1-86d7-4739-8b57-c7874b0c6f6d - - - - - -] DHCP configuration for ports {'9db71aef-fe33-48b0-8d94-731d492ffc16'} is completed
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 394 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 59 KiB/s rd, 25 MiB/s wr, 92 op/s
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/addn_hosts - 1 addresses
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/host
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/opts
Dec 06 10:22:05 np0005548790.localdomain podman[319570]: 2025-12-06 10:22:05.284696663 +0000 UTC m=+0.068945346 container kill 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "format": "json"}]: dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:05.426+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e' of type subvolume
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e' of type subvolume
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e'' moved to trashcan
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:05.502 262327 INFO neutron.agent.dhcp.agent [None req-29782c15-b0d0-4bf6-b7a9-e37182d6f637 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:22:03Z, description=, device_id=d826eb4e-7fd5-4cdb-9ce2-8323b958c600, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c858617f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85842e50>], id=15c435a6-24c2-449b-80f1-b5c887430dd6, ip_allocation=immediate, mac_address=fa:16:3e:28:2e:db, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:22:01Z, description=, dns_domain=, id=b7407861-c6cf-4598-89e1-e36e48483082, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1817397334, port_security_enabled=True, project_id=b4daafaf0e264da6a728bdd60c5d6377, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=44157, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3062, status=ACTIVE, subnets=['5e474b64-667a-42d9-8669-d3682204d710'], tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:02Z, vlan_transparent=None, network_id=b7407861-c6cf-4598-89e1-e36e48483082, port_security_enabled=False, project_id=b4daafaf0e264da6a728bdd60c5d6377, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3080, status=DOWN, tags=[], tenant_id=b4daafaf0e264da6a728bdd60c5d6377, updated_at=2025-12-06T10:22:03Z on network b7407861-c6cf-4598-89e1-e36e48483082
Dec 06 10:22:05 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:05.558 262327 INFO neutron.agent.dhcp.agent [None req-2720f845-c4d4-46f2-b7b8-ab78234768e3 - - - - - -] DHCP configuration for ports {'15c435a6-24c2-449b-80f1-b5c887430dd6'} is completed
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain dnsmasq[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/addn_hosts - 1 addresses
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/host
Dec 06 10:22:05 np0005548790.localdomain podman[319609]: 2025-12-06 10:22:05.705863833 +0000 UTC m=+0.063415008 container kill 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:22:05 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/opts
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4a25353d-60c6-4e9f-96d0-37cd3eb987b5/.meta.tmp'
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4a25353d-60c6-4e9f-96d0-37cd3eb987b5/.meta.tmp' to config b'/volumes/_nogroup/4a25353d-60c6-4e9f-96d0-37cd3eb987b5/.meta'
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "format": "json"}]: dispatch
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, vol_name:cephfs) < ""
Dec 06 10:22:05 np0005548790.localdomain systemd[1]: tmp-crun.4RUGUi.mount: Deactivated successfully.
Dec 06 10:22:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:06.022 262327 INFO neutron.agent.dhcp.agent [None req-acbe6e77-3cf5-46c8-8046-bf92cea84246 - - - - - -] DHCP configuration for ports {'15c435a6-24c2-449b-80f1-b5c887430dd6'} is completed
Dec 06 10:22:06 np0005548790.localdomain dnsmasq[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/addn_hosts - 0 addresses
Dec 06 10:22:06 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/host
Dec 06 10:22:06 np0005548790.localdomain dnsmasq-dhcp[319553]: read /var/lib/neutron/dhcp/b7407861-c6cf-4598-89e1-e36e48483082/opts
Dec 06 10:22:06 np0005548790.localdomain podman[319647]: 2025-12-06 10:22:06.213943155 +0000 UTC m=+0.081536168 container kill 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:22:06 np0005548790.localdomain ceph-mon[301742]: pgmap v412: 177 pgs: 177 active+clean; 394 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 59 KiB/s rd, 25 MiB/s wr, 92 op/s
Dec 06 10:22:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "format": "json"}]: dispatch
Dec 06 10:22:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1c1b8ab9-c6b5-4cb1-94fb-5001df83d93e", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:06.433 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:06Z|00240|binding|INFO|Releasing lport 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 from this chassis (sb_readonly=0)
Dec 06 10:22:06 np0005548790.localdomain kernel: device tap15da9b9b-1f left promiscuous mode
Dec 06 10:22:06 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:06Z|00241|binding|INFO|Setting lport 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 down in Southbound
Dec 06 10:22:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:06.453 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:06.499 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-b7407861-c6cf-4598-89e1-e36e48483082', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7407861-c6cf-4598-89e1-e36e48483082', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4daafaf0e264da6a728bdd60c5d6377', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d7e2708-917f-4980-97f9-eb0ae9a1323f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:06.501 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 15da9b9b-1fc2-4ebc-bf06-0a44b13aa4e6 in datapath b7407861-c6cf-4598-89e1-e36e48483082 unbound from our chassis
Dec 06 10:22:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:06.503 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b7407861-c6cf-4598-89e1-e36e48483082 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:22:06 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:06.504 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[306eda9b-65c0-43ab-ba0e-88b250fd9057]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:22:07 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "format": "json"}]: dispatch
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:22:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 394 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 48 KiB/s rd, 20 MiB/s wr, 74 op/s
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e193 e193: 6 total, 6 up, 6 in
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.406539) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527406578, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 253, "total_data_size": 409840, "memory_usage": 419144, "flush_reason": "Manual Compaction"}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527410708, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 270596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26122, "largest_seqno": 26564, "table_properties": {"data_size": 267814, "index_size": 765, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 18, "raw_value_size": 261976, "raw_average_value_size": 727, "num_data_blocks": 29, "num_entries": 360, "num_filter_entries": 360, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016522, "oldest_key_time": 1765016522, "file_creation_time": 1765016527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 4217 microseconds, and 1957 cpu microseconds.
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.410753) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 270596 bytes OK
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.410808) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.412520) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.412542) EVENT_LOG_v1 {"time_micros": 1765016527412535, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.412561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 406914, prev total WAL file size 406914, number of live WAL files 2.
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.413215) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353336' seq:72057594037927935, type:22 .. '6B760031373930' seq:0, type:0; will stop at (end)
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(264KB)], [39(18MB)]
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527413273, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 19828823, "oldest_snapshot_seqno": -1}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 13147 keys, 18733226 bytes, temperature: kUnknown
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527508640, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 18733226, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18660174, "index_size": 39095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 353813, "raw_average_key_size": 26, "raw_value_size": 18438364, "raw_average_value_size": 1402, "num_data_blocks": 1451, "num_entries": 13147, "num_filter_entries": 13147, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016527, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.509434) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 18733226 bytes
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.511979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.7 rd, 196.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 18.7 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(142.5) write-amplify(69.2) OK, records in: 13679, records dropped: 532 output_compression: NoCompression
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.512013) EVENT_LOG_v1 {"time_micros": 1765016527511998, "job": 22, "event": "compaction_finished", "compaction_time_micros": 95467, "compaction_time_cpu_micros": 48449, "output_level": 6, "num_output_files": 1, "total_output_size": 18733226, "num_input_records": 13679, "num_output_records": 13147, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527512321, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016527515389, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.413059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.515493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.515501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.515505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.515509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:07 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:07.515513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:08.375 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:08 np0005548790.localdomain ceph-mon[301742]: pgmap v413: 177 pgs: 177 active+clean; 394 MiB data, 1.5 GiB used, 40 GiB / 42 GiB avail; 48 KiB/s rd, 20 MiB/s wr, 74 op/s
Dec 06 10:22:08 np0005548790.localdomain ceph-mon[301742]: osdmap e193: 6 total, 6 up, 6 in
Dec 06 10:22:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4c1f4920-d346-409d-af49-370c9b85e205", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c1f4920-d346-409d-af49-370c9b85e205, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4c1f4920-d346-409d-af49-370c9b85e205/.meta.tmp'
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4c1f4920-d346-409d-af49-370c9b85e205/.meta.tmp' to config b'/volumes/_nogroup/4c1f4920-d346-409d-af49-370c9b85e205/.meta'
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c1f4920-d346-409d-af49-370c9b85e205, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4c1f4920-d346-409d-af49-370c9b85e205", "format": "json"}]: dispatch
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c1f4920-d346-409d-af49-370c9b85e205, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c1f4920-d346-409d-af49-370c9b85e205, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:09.118 262327 INFO neutron.agent.linux.ip_lib [None req-abea6620-26c6-477c-b329-f1d80fa3ecbf - - - - - -] Device tapa93c3801-ac cannot be used as it has no MAC address
Dec 06 10:22:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:09.140 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:09 np0005548790.localdomain kernel: device tapa93c3801-ac entered promiscuous mode
Dec 06 10:22:09 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016529.1488] manager: (tapa93c3801-ac): new Generic device (/org/freedesktop/NetworkManager/Devices/49)
Dec 06 10:22:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:09Z|00242|binding|INFO|Claiming lport a93c3801-ac62-40f2-92c4-fd51e591ada9 for this chassis.
Dec 06 10:22:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:09Z|00243|binding|INFO|a93c3801-ac62-40f2-92c4-fd51e591ada9: Claiming unknown
Dec 06 10:22:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:09.149 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:09 np0005548790.localdomain systemd-udevd[319681]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:22:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:09.163 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-9366a545-78d0-4ca5-b3cb-58171400754a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9366a545-78d0-4ca5-b3cb-58171400754a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4daafaf0e264da6a728bdd60c5d6377', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0b9eb52-6f16-49ad-91f0-d5358f8fa9b1, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=a93c3801-ac62-40f2-92c4-fd51e591ada9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:09.166 159200 INFO neutron.agent.ovn.metadata.agent [-] Port a93c3801-ac62-40f2-92c4-fd51e591ada9 in datapath 9366a545-78d0-4ca5-b3cb-58171400754a bound to our chassis
Dec 06 10:22:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:09.168 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9366a545-78d0-4ca5-b3cb-58171400754a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:22:09 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:09.169 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[118eb561-1fbd-43c0-8f8d-8d96184bbdac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "format": "json"}]: dispatch
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:09.187+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4a25353d-60c6-4e9f-96d0-37cd3eb987b5' of type subvolume
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4a25353d-60c6-4e9f-96d0-37cd3eb987b5' of type subvolume
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:09Z|00244|binding|INFO|Setting lport a93c3801-ac62-40f2-92c4-fd51e591ada9 ovn-installed in OVS
Dec 06 10:22:09 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:09Z|00245|binding|INFO|Setting lport a93c3801-ac62-40f2-92c4-fd51e591ada9 up in Southbound
Dec 06 10:22:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:09.201 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4a25353d-60c6-4e9f-96d0-37cd3eb987b5'' moved to trashcan
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4a25353d-60c6-4e9f-96d0-37cd3eb987b5, vol_name:cephfs) < ""
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapa93c3801-ac: No such device
Dec 06 10:22:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v415: 177 pgs: 177 active+clean; 514 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 22 KiB/s rd, 27 MiB/s wr, 44 op/s
Dec 06 10:22:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:09.244 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:09.281 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:10.147 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:10 np0005548790.localdomain podman[319752]: 
Dec 06 10:22:10 np0005548790.localdomain podman[319752]: 2025-12-06 10:22:10.226604463 +0000 UTC m=+0.108579230 container create e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:22:10 np0005548790.localdomain podman[319752]: 2025-12-06 10:22:10.168293425 +0000 UTC m=+0.050268202 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:22:10 np0005548790.localdomain systemd[1]: Started libpod-conmon-e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0.scope.
Dec 06 10:22:10 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:22:10 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15d7027b0593ce94089214b724a816bab3f59c7f7ab28b9b49705ed76d439d92/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:22:10 np0005548790.localdomain podman[319752]: 2025-12-06 10:22:10.318271043 +0000 UTC m=+0.200245800 container init e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 10:22:10 np0005548790.localdomain podman[319752]: 2025-12-06 10:22:10.327618497 +0000 UTC m=+0.209593264 container start e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:22:10 np0005548790.localdomain dnsmasq[319771]: started, version 2.85 cachesize 150
Dec 06 10:22:10 np0005548790.localdomain dnsmasq[319771]: DNS service limited to local subnets
Dec 06 10:22:10 np0005548790.localdomain dnsmasq[319771]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:22:10 np0005548790.localdomain dnsmasq[319771]: warning: no upstream servers configured
Dec 06 10:22:10 np0005548790.localdomain dnsmasq-dhcp[319771]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Dec 06 10:22:10 np0005548790.localdomain dnsmasq[319771]: read /var/lib/neutron/dhcp/9366a545-78d0-4ca5-b3cb-58171400754a/addn_hosts - 0 addresses
Dec 06 10:22:10 np0005548790.localdomain dnsmasq-dhcp[319771]: read /var/lib/neutron/dhcp/9366a545-78d0-4ca5-b3cb-58171400754a/host
Dec 06 10:22:10 np0005548790.localdomain dnsmasq-dhcp[319771]: read /var/lib/neutron/dhcp/9366a545-78d0-4ca5-b3cb-58171400754a/opts
Dec 06 10:22:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4c1f4920-d346-409d-af49-370c9b85e205", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4c1f4920-d346-409d-af49-370c9b85e205", "format": "json"}]: dispatch
Dec 06 10:22:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "format": "json"}]: dispatch
Dec 06 10:22:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4a25353d-60c6-4e9f-96d0-37cd3eb987b5", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:10 np0005548790.localdomain ceph-mon[301742]: pgmap v415: 177 pgs: 177 active+clean; 514 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 22 KiB/s rd, 27 MiB/s wr, 44 op/s
Dec 06 10:22:10 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:10.556 262327 INFO neutron.agent.dhcp.agent [None req-b75ee33c-db71-4ac7-b69b-c063878d5ac4 - - - - - -] DHCP configuration for ports {'6c19091f-012c-48ba-94d6-6c100aaf1890'} is completed
Dec 06 10:22:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 514 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 19 KiB/s rd, 23 MiB/s wr, 37 op/s
Dec 06 10:22:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:22:11 np0005548790.localdomain podman[319772]: 2025-12-06 10:22:11.570486336 +0000 UTC m=+0.083365787 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:22:11 np0005548790.localdomain podman[319772]: 2025-12-06 10:22:11.575770939 +0000 UTC m=+0.088650380 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 06 10:22:11 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:22:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:12 np0005548790.localdomain dnsmasq[319771]: exiting on receipt of SIGTERM
Dec 06 10:22:12 np0005548790.localdomain podman[319807]: 2025-12-06 10:22:12.219933655 +0000 UTC m=+0.069442561 container kill e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:22:12 np0005548790.localdomain systemd[1]: libpod-e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0.scope: Deactivated successfully.
Dec 06 10:22:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:12Z|00246|binding|INFO|Removing iface tapa93c3801-ac ovn-installed in OVS
Dec 06 10:22:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:12Z|00247|binding|INFO|Removing lport a93c3801-ac62-40f2-92c4-fd51e591ada9 ovn-installed in OVS
Dec 06 10:22:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:12.230 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:12.234 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 475f962f-914e-4d95-9cff-3976411e3461 with type ""
Dec 06 10:22:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:12.236 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-9366a545-78d0-4ca5-b3cb-58171400754a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9366a545-78d0-4ca5-b3cb-58171400754a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b4daafaf0e264da6a728bdd60c5d6377', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0b9eb52-6f16-49ad-91f0-d5358f8fa9b1, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=a93c3801-ac62-40f2-92c4-fd51e591ada9) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:12.237 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:12.239 159200 INFO neutron.agent.ovn.metadata.agent [-] Port a93c3801-ac62-40f2-92c4-fd51e591ada9 in datapath 9366a545-78d0-4ca5-b3cb-58171400754a unbound from our chassis
Dec 06 10:22:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:12.241 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9366a545-78d0-4ca5-b3cb-58171400754a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:22:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:12.242 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f7f567-9441-463d-ba62-82d6dc4ec955]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:22:12 np0005548790.localdomain podman[319821]: 2025-12-06 10:22:12.294102913 +0000 UTC m=+0.061705691 container died e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4c1f4920-d346-409d-af49-370c9b85e205", "format": "json"}]: dispatch
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4c1f4920-d346-409d-af49-370c9b85e205, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4c1f4920-d346-409d-af49-370c9b85e205, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:12.307+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c1f4920-d346-409d-af49-370c9b85e205' of type subvolume
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c1f4920-d346-409d-af49-370c9b85e205' of type subvolume
Dec 06 10:22:12 np0005548790.localdomain systemd[1]: tmp-crun.4oiAr8.mount: Deactivated successfully.
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4c1f4920-d346-409d-af49-370c9b85e205", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c1f4920-d346-409d-af49-370c9b85e205, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4c1f4920-d346-409d-af49-370c9b85e205'' moved to trashcan
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c1f4920-d346-409d-af49-370c9b85e205, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:12 np0005548790.localdomain podman[319821]: 2025-12-06 10:22:12.39816151 +0000 UTC m=+0.165764248 container cleanup e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:22:12 np0005548790.localdomain systemd[1]: libpod-conmon-e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0.scope: Deactivated successfully.
Dec 06 10:22:12 np0005548790.localdomain podman[319823]: 2025-12-06 10:22:12.42403878 +0000 UTC m=+0.181349590 container remove e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9366a545-78d0-4ca5-b3cb-58171400754a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:22:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:12.442 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:12 np0005548790.localdomain kernel: device tapa93c3801-ac left promiscuous mode
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:12.456 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:12 np0005548790.localdomain ceph-mon[301742]: pgmap v416: 177 pgs: 177 active+clean; 514 MiB data, 1.8 GiB used, 40 GiB / 42 GiB avail; 19 KiB/s rd, 23 MiB/s wr, 37 op/s
Dec 06 10:22:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:12.479 262327 INFO neutron.agent.dhcp.agent [None req-6552f84c-3414-4810-9d0a-e6fe1ce5ef1e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:12.480 262327 INFO neutron.agent.dhcp.agent [None req-6552f84c-3414-4810-9d0a-e6fe1ce5ef1e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-15d7027b0593ce94089214b724a816bab3f59c7f7ab28b9b49705ed76d439d92-merged.mount: Deactivated successfully.
Dec 06 10:22:12 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e74f55ea916cf1a0baca0a9063780ead8b3fdc67c6e9ce4a35523fe16f600be0-userdata-shm.mount: Deactivated successfully.
Dec 06 10:22:12 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2d9366a545\x2d78d0\x2d4ca5\x2db3cb\x2d58171400754a.mount: Deactivated successfully.
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp'
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp' to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta'
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "format": "json"}]: dispatch
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:12.740 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 642 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 25 MiB/s wr, 36 op/s
Dec 06 10:22:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:13.353 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:13.355 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:13 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:13.356 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:22:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:13.372 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:13.379 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4c1f4920-d346-409d-af49-370c9b85e205", "format": "json"}]: dispatch
Dec 06 10:22:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4c1f4920-d346-409d-af49-370c9b85e205", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "format": "json"}]: dispatch
Dec 06 10:22:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:13 np0005548790.localdomain dnsmasq[319553]: exiting on receipt of SIGTERM
Dec 06 10:22:13 np0005548790.localdomain podman[319869]: 2025-12-06 10:22:13.616761522 +0000 UTC m=+0.057553439 container kill 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:22:13 np0005548790.localdomain systemd[1]: libpod-7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17.scope: Deactivated successfully.
Dec 06 10:22:13 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:22:13 np0005548790.localdomain podman[319882]: 2025-12-06 10:22:13.674760271 +0000 UTC m=+0.046301473 container died 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:22:13 np0005548790.localdomain systemd[1]: tmp-crun.ZZvQIb.mount: Deactivated successfully.
Dec 06 10:22:13 np0005548790.localdomain podman[319882]: 2025-12-06 10:22:13.73014065 +0000 UTC m=+0.101681822 container cleanup 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 06 10:22:13 np0005548790.localdomain systemd[1]: libpod-conmon-7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17.scope: Deactivated successfully.
Dec 06 10:22:13 np0005548790.localdomain podman[319885]: 2025-12-06 10:22:13.725942117 +0000 UTC m=+0.088009903 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:22:13 np0005548790.localdomain podman[319885]: 2025-12-06 10:22:13.806761424 +0000 UTC m=+0.168829150 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:22:13 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:22:13 np0005548790.localdomain podman[319884]: 2025-12-06 10:22:13.865296928 +0000 UTC m=+0.228424773 container remove 7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b7407861-c6cf-4598-89e1-e36e48483082, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 10:22:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:14.261 262327 INFO neutron.agent.dhcp.agent [None req-4fcbd2b6-52f7-4395-a09f-c41bb70f7079 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:22:14 np0005548790.localdomain ceph-mon[301742]: pgmap v417: 177 pgs: 177 active+clean; 642 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 25 MiB/s wr, 36 op/s
Dec 06 10:22:14 np0005548790.localdomain podman[319934]: 2025-12-06 10:22:14.589507191 +0000 UTC m=+0.099849154 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: tmp-crun.iQnVt7.mount: Deactivated successfully.
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-518cca98c465df79cc2d3f96a17fa6b4edb79d852644e8a50566fa5d5ac420d2-merged.mount: Deactivated successfully.
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bd1f232b3ffb7d1f2215ff4841866601a1cfb8fb10bf283581563d252272c17-userdata-shm.mount: Deactivated successfully.
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2db7407861\x2dc6cf\x2d4598\x2d89e1\x2de36e48483082.mount: Deactivated successfully.
Dec 06 10:22:14 np0005548790.localdomain podman[319934]: 2025-12-06 10:22:14.62825866 +0000 UTC m=+0.138600573 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:22:14 np0005548790.localdomain podman[319935]: 2025-12-06 10:22:14.639414862 +0000 UTC m=+0.142474408 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Dec 06 10:22:14 np0005548790.localdomain podman[319935]: 2025-12-06 10:22:14.723159979 +0000 UTC m=+0.226219475 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.openshift.expose-services=, distribution-scope=public, release=1755695350, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec 06 10:22:14 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:22:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:14.927 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 642 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 25 MiB/s wr, 36 op/s
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, vol_name:cephfs) < ""
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e4229277-1c8f-4f16-a3c9-ae90dd92f437/.meta.tmp'
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e4229277-1c8f-4f16-a3c9-ae90dd92f437/.meta.tmp' to config b'/volumes/_nogroup/e4229277-1c8f-4f16-a3c9-ae90dd92f437/.meta'
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, vol_name:cephfs) < ""
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "format": "json"}]: dispatch
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, vol_name:cephfs) < ""
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, vol_name:cephfs) < ""
Dec 06 10:22:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:15.740 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567", "format": "json"}]: dispatch
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: pgmap v418: 177 pgs: 177 active+clean; 642 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 25 MiB/s wr, 36 op/s
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "format": "json"}]: dispatch
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.575864) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536575903, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 377, "num_deletes": 251, "total_data_size": 194848, "memory_usage": 203192, "flush_reason": "Manual Compaction"}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536579004, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 125501, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26569, "largest_seqno": 26941, "table_properties": {"data_size": 123249, "index_size": 363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6109, "raw_average_key_size": 19, "raw_value_size": 118707, "raw_average_value_size": 384, "num_data_blocks": 16, "num_entries": 309, "num_filter_entries": 309, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016528, "oldest_key_time": 1765016528, "file_creation_time": 1765016536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 3180 microseconds, and 970 cpu microseconds.
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.579045) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 125501 bytes OK
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.579063) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.581168) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.581195) EVENT_LOG_v1 {"time_micros": 1765016536581187, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.581219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 192318, prev total WAL file size 192318, number of live WAL files 2.
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.581841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end)
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(122KB)], [42(17MB)]
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536581897, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 18858727, "oldest_snapshot_seqno": -1}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 12941 keys, 17664214 bytes, temperature: kUnknown
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536691290, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 17664214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17593780, "index_size": 36989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 350054, "raw_average_key_size": 27, "raw_value_size": 17376846, "raw_average_value_size": 1342, "num_data_blocks": 1359, "num_entries": 12941, "num_filter_entries": 12941, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016536, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.691604) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 17664214 bytes
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.693737) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.2 rd, 161.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 17.9 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(291.0) write-amplify(140.7) OK, records in: 13456, records dropped: 515 output_compression: NoCompression
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.693764) EVENT_LOG_v1 {"time_micros": 1765016536693753, "job": 24, "event": "compaction_finished", "compaction_time_micros": 109508, "compaction_time_cpu_micros": 46725, "output_level": 6, "num_output_files": 1, "total_output_size": 17664214, "num_input_records": 13456, "num_output_records": 12941, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536694073, "job": 24, "event": "table_file_deletion", "file_number": 44}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016536696602, "job": 24, "event": "table_file_deletion", "file_number": 42}
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.581693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.696687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.696695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.696698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.696701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:16 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:22:16.696703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:22:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 642 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 25 MiB/s wr, 36 op/s
Dec 06 10:22:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567", "format": "json"}]: dispatch
Dec 06 10:22:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:17.962 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:22:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:22:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:18.403 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:22:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:22:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:22:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "731d0819-2292-4c89-bff7-ec72ce366121", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:731d0819-2292-4c89-bff7-ec72ce366121, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain dnsmasq[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/addn_hosts - 0 addresses
Dec 06 10:22:18 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/host
Dec 06 10:22:18 np0005548790.localdomain podman[319990]: 2025-12-06 10:22:18.618541283 +0000 UTC m=+0.076656625 container kill 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:22:18 np0005548790.localdomain dnsmasq-dhcp[319083]: read /var/lib/neutron/dhcp/d87c79bc-37e1-4b28-b5d3-b5f930e4ffad/opts
Dec 06 10:22:18 np0005548790.localdomain ceph-mon[301742]: pgmap v419: 177 pgs: 177 active+clean; 642 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 18 KiB/s rd, 25 MiB/s wr, 36 op/s
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/731d0819-2292-4c89-bff7-ec72ce366121/.meta.tmp'
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/731d0819-2292-4c89-bff7-ec72ce366121/.meta.tmp' to config b'/volumes/_nogroup/731d0819-2292-4c89-bff7-ec72ce366121/.meta'
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:731d0819-2292-4c89-bff7-ec72ce366121, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "731d0819-2292-4c89-bff7-ec72ce366121", "format": "json"}]: dispatch
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:731d0819-2292-4c89-bff7-ec72ce366121, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:731d0819-2292-4c89-bff7-ec72ce366121, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:18.834 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:18Z|00248|binding|INFO|Releasing lport 727d3eb4-92bc-4821-b7ef-8068b10cc075 from this chassis (sb_readonly=0)
Dec 06 10:22:18 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:22:18Z|00249|binding|INFO|Setting lport 727d3eb4-92bc-4821-b7ef-8068b10cc075 down in Southbound
Dec 06 10:22:18 np0005548790.localdomain kernel: device tap727d3eb4-92 left promiscuous mode
Dec 06 10:22:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:18.851 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.3/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e82deaff368b4feea9fec0f06459a6ca', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74acc2e1-8d76-43a2-a761-033f794445a3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=727d3eb4-92bc-4821-b7ef-8068b10cc075) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:18.853 159200 INFO neutron.agent.ovn.metadata.agent [-] Port 727d3eb4-92bc-4821-b7ef-8068b10cc075 in datapath d87c79bc-37e1-4b28-b5d3-b5f930e4ffad unbound from our chassis
Dec 06 10:22:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:18.856 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:22:18 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:18.857 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[c734cc3f-9c79-4f5d-8a77-9d5d03edee82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:22:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:18.869 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:18.869 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "format": "json"}]: dispatch
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:18.972+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e4229277-1c8f-4f16-a3c9-ae90dd92f437' of type subvolume
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e4229277-1c8f-4f16-a3c9-ae90dd92f437' of type subvolume
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, vol_name:cephfs) < ""
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e4229277-1c8f-4f16-a3c9-ae90dd92f437'' moved to trashcan
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e4229277-1c8f-4f16-a3c9-ae90dd92f437, vol_name:cephfs) < ""
Dec 06 10:22:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567", "target_sub_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, target_sub_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, vol_name:cephfs) < ""
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 746 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 22 KiB/s rd, 30 MiB/s wr, 45 op/s
Dec 06 10:22:19 np0005548790.localdomain podman[320028]: 2025-12-06 10:22:19.298909199 +0000 UTC m=+0.076589774 container kill 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: tmp-crun.5hTlgV.mount: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain dnsmasq[319083]: exiting on receipt of SIGTERM
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: libpod-8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1.scope: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp'
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp' to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta'
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 69ee59db-ec81-47f4-ba9d-fb78d14fbf91 for path b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44'
Dec 06 10:22:19 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:19.358 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp'
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp' to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta'
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, target_sub_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, vol_name:cephfs) < ""
Dec 06 10:22:19 np0005548790.localdomain podman[320043]: 2025-12-06 10:22:19.363666571 +0000 UTC m=+0.046798688 container died 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.375+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.375+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.375+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.375+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.375+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:19 np0005548790.localdomain podman[320043]: 2025-12-06 10:22:19.396452279 +0000 UTC m=+0.079584386 container cleanup 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: libpod-conmon-8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1.scope: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, c4b473ee-dbff-47ba-b3da-5329e0795d44)
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.414+0000 7f063a5f8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.414+0000 7f063a5f8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.414+0000 7f063a5f8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.414+0000 7f063a5f8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:19.414+0000 7f063a5f8640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:22:19 np0005548790.localdomain podman[320044]: 2025-12-06 10:22:19.424746935 +0000 UTC m=+0.100088521 container remove 8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d87c79bc-37e1-4b28-b5d3-b5f930e4ffad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, c4b473ee-dbff-47ba-b3da-5329e0795d44) -- by 0 seconds
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp'
Dec 06 10:22:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp' to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta'
Dec 06 10:22:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:19.591 262327 INFO neutron.agent.dhcp.agent [None req-82284e1a-c03e-4eaf-9df2-bdb2df423b76 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:19.592 262327 INFO neutron.agent.dhcp.agent [None req-82284e1a-c03e-4eaf-9df2-bdb2df423b76 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: tmp-crun.OsgmVV.mount: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-aaaab77e0559799f1c31030594b7ff933f206d40e78c1f9cc566f48d9a6455b3-merged.mount: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8fe8adb776e3f11e3c210fa0626551e77a8bf4653ed93099491d771f2d9989b1-userdata-shm.mount: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2dd87c79bc\x2d37e1\x2d4b28\x2db5d3\x2db5f930e4ffad.mount: Deactivated successfully.
Dec 06 10:22:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "731d0819-2292-4c89-bff7-ec72ce366121", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "731d0819-2292-4c89-bff7-ec72ce366121", "format": "json"}]: dispatch
Dec 06 10:22:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:19 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:22:19.749 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:22:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:19.982 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.snap/14d68a54-7a75-45f3-abbd-01dbc7ace567/b444b06c-5a06-4977-87e8-20c66e59fc18' to b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/7f13fd1f-2114-4d78-ab04-46e2346a6487'
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp'
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp' to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta'
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.clone_index] untracking 69ee59db-ec81-47f4-ba9d-fb78d14fbf91
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp'
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp' to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta'
Dec 06 10:22:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "format": "json"}]: dispatch
Dec 06 10:22:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e4229277-1c8f-4f16-a3c9-ae90dd92f437", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567", "target_sub_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:20 np0005548790.localdomain ceph-mon[301742]: pgmap v420: 177 pgs: 177 active+clean; 746 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 22 KiB/s rd, 30 MiB/s wr, 45 op/s
Dec 06 10:22:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp'
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta.tmp' to config b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44/.meta'
Dec 06 10:22:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, c4b473ee-dbff-47ba-b3da-5329e0795d44)
Dec 06 10:22:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v421: 177 pgs: 177 active+clean; 746 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 19 MiB/s wr, 28 op/s
Dec 06 10:22:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:22:21 np0005548790.localdomain systemd[1]: tmp-crun.WOrC52.mount: Deactivated successfully.
Dec 06 10:22:21 np0005548790.localdomain podman[320092]: 2025-12-06 10:22:21.580509784 +0000 UTC m=+0.091838456 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:22:21 np0005548790.localdomain podman[320092]: 2025-12-06 10:22:21.62213711 +0000 UTC m=+0.133465782 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Dec 06 10:22:21 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea385aa8-7125-4234-a610-ef3ee4890f37, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea385aa8-7125-4234-a610-ef3ee4890f37/.meta.tmp'
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea385aa8-7125-4234-a610-ef3ee4890f37/.meta.tmp' to config b'/volumes/_nogroup/ea385aa8-7125-4234-a610-ef3ee4890f37/.meta'
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea385aa8-7125-4234-a610-ef3ee4890f37, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "format": "json"}]: dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea385aa8-7125-4234-a610-ef3ee4890f37, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea385aa8-7125-4234-a610-ef3ee4890f37, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d56ed464-2af3-4efa-8437-4562ed59da6b, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d56ed464-2af3-4efa-8437-4562ed59da6b/.meta.tmp'
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d56ed464-2af3-4efa-8437-4562ed59da6b/.meta.tmp' to config b'/volumes/_nogroup/d56ed464-2af3-4efa-8437-4562ed59da6b/.meta'
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d56ed464-2af3-4efa-8437-4562ed59da6b, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "format": "json"}]: dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d56ed464-2af3-4efa-8437-4562ed59da6b, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d56ed464-2af3-4efa-8437-4562ed59da6b, vol_name:cephfs) < ""
Dec 06 10:22:22 np0005548790.localdomain ceph-mon[301742]: pgmap v421: 177 pgs: 177 active+clean; 746 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 19 MiB/s wr, 28 op/s
Dec 06 10:22:22 np0005548790.localdomain ceph-mon[301742]: mgrmap e51: np0005548790.kvkfyr(active, since 10m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:22:22 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2693859626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:22 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2693859626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 867 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 21 KiB/s rd, 29 MiB/s wr, 44 op/s
Dec 06 10:22:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:23.406 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:23.413 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:22:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:22:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:22:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:22:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:22:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "format": "json"}]: dispatch
Dec 06 10:22:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "format": "json"}]: dispatch
Dec 06 10:22:23 np0005548790.localdomain ceph-mon[301742]: pgmap v422: 177 pgs: 177 active+clean; 867 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 21 KiB/s rd, 29 MiB/s wr, 44 op/s
Dec 06 10:22:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:22:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:22:24 np0005548790.localdomain podman[320110]: 2025-12-06 10:22:24.568975211 +0000 UTC m=+0.082028551 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:22:24 np0005548790.localdomain podman[320110]: 2025-12-06 10:22:24.58333215 +0000 UTC m=+0.096385500 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:22:24 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:22:24 np0005548790.localdomain systemd[1]: tmp-crun.N8xOKU.mount: Deactivated successfully.
Dec 06 10:22:24 np0005548790.localdomain podman[320111]: 2025-12-06 10:22:24.640679912 +0000 UTC m=+0.150801382 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:22:24 np0005548790.localdomain podman[320111]: 2025-12-06 10:22:24.707238954 +0000 UTC m=+0.217360454 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:22:24 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 867 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 19 MiB/s wr, 30 op/s
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "format": "json"}]: dispatch
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea385aa8-7125-4234-a610-ef3ee4890f37, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea385aa8-7125-4234-a610-ef3ee4890f37, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea385aa8-7125-4234-a610-ef3ee4890f37' of type subvolume
Dec 06 10:22:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:25.511+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea385aa8-7125-4234-a610-ef3ee4890f37' of type subvolume
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea385aa8-7125-4234-a610-ef3ee4890f37, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea385aa8-7125-4234-a610-ef3ee4890f37'' moved to trashcan
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea385aa8-7125-4234-a610-ef3ee4890f37, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "format": "json"}]: dispatch
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d56ed464-2af3-4efa-8437-4562ed59da6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d56ed464-2af3-4efa-8437-4562ed59da6b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd56ed464-2af3-4efa-8437-4562ed59da6b' of type subvolume
Dec 06 10:22:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:25.737+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd56ed464-2af3-4efa-8437-4562ed59da6b' of type subvolume
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d56ed464-2af3-4efa-8437-4562ed59da6b, vol_name:cephfs) < ""
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d56ed464-2af3-4efa-8437-4562ed59da6b'' moved to trashcan
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d56ed464-2af3-4efa-8437-4562ed59da6b, vol_name:cephfs) < ""
Dec 06 10:22:26 np0005548790.localdomain ceph-mon[301742]: pgmap v423: 177 pgs: 177 active+clean; 867 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 19 MiB/s wr, 30 op/s
Dec 06 10:22:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "format": "json"}]: dispatch
Dec 06 10:22:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea385aa8-7125-4234-a610-ef3ee4890f37", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "format": "json"}]: dispatch
Dec 06 10:22:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d56ed464-2af3-4efa-8437-4562ed59da6b", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v424: 177 pgs: 177 active+clean; 867 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 19 MiB/s wr, 30 op/s
Dec 06 10:22:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:28.413 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:28.415 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:28.416 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:22:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:28.416 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:28.450 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:28.450 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:28 np0005548790.localdomain ceph-mon[301742]: pgmap v424: 177 pgs: 177 active+clean; 867 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 19 MiB/s wr, 30 op/s
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "731d0819-2292-4c89-bff7-ec72ce366121", "format": "json"}]: dispatch
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:731d0819-2292-4c89-bff7-ec72ce366121, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:731d0819-2292-4c89-bff7-ec72ce366121, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '731d0819-2292-4c89-bff7-ec72ce366121' of type subvolume
Dec 06 10:22:28 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:28.789+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '731d0819-2292-4c89-bff7-ec72ce366121' of type subvolume
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "731d0819-2292-4c89-bff7-ec72ce366121", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:731d0819-2292-4c89-bff7-ec72ce366121, vol_name:cephfs) < ""
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/731d0819-2292-4c89-bff7-ec72ce366121'' moved to trashcan
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:731d0819-2292-4c89-bff7-ec72ce366121, vol_name:cephfs) < ""
Dec 06 10:22:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 987 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 21 KiB/s rd, 29 MiB/s wr, 45 op/s
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "snap_name": "34ca82e6-47fe-4edf-98f0-b49a22c3b971_3308a422-28d2-40bc-9817-d02064ebbe3c", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:34ca82e6-47fe-4edf-98f0-b49a22c3b971_3308a422-28d2-40bc-9817-d02064ebbe3c, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta.tmp'
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta.tmp' to config b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta'
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:34ca82e6-47fe-4edf-98f0-b49a22c3b971_3308a422-28d2-40bc-9817-d02064ebbe3c, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "snap_name": "34ca82e6-47fe-4edf-98f0-b49a22c3b971", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:34ca82e6-47fe-4edf-98f0-b49a22c3b971, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta.tmp'
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta.tmp' to config b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a/.meta'
Dec 06 10:22:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:34ca82e6-47fe-4edf-98f0-b49a22c3b971, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:22:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:22:29 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3455244347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "731d0819-2292-4c89-bff7-ec72ce366121", "format": "json"}]: dispatch
Dec 06 10:22:29 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3455244347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "731d0819-2292-4c89-bff7-ec72ce366121", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:30 np0005548790.localdomain ceph-mon[301742]: pgmap v425: 177 pgs: 177 active+clean; 987 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 21 KiB/s rd, 29 MiB/s wr, 45 op/s
Dec 06 10:22:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "snap_name": "34ca82e6-47fe-4edf-98f0-b49a22c3b971_3308a422-28d2-40bc-9817-d02064ebbe3c", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "snap_name": "34ca82e6-47fe-4edf-98f0-b49a22c3b971", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:30 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e194 e194: 6 total, 6 up, 6 in
Dec 06 10:22:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v427: 177 pgs: 177 active+clean; 987 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 24 MiB/s wr, 37 op/s
Dec 06 10:22:31 np0005548790.localdomain ceph-mon[301742]: osdmap e194: 6 total, 6 up, 6 in
Dec 06 10:22:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e195 e195: 6 total, 6 up, 6 in
Dec 06 10:22:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "04a68994-1285-4b19-bd78-8daa43192107", "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:04a68994-1285-4b19-bd78-8daa43192107, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:04a68994-1285-4b19-bd78-8daa43192107, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "format": "json"}]: dispatch
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9d3cab8c-98e7-4693-a92b-d356598b900a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9d3cab8c-98e7-4693-a92b-d356598b900a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9d3cab8c-98e7-4693-a92b-d356598b900a' of type subvolume
Dec 06 10:22:32 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:32.543+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9d3cab8c-98e7-4693-a92b-d356598b900a' of type subvolume
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9d3cab8c-98e7-4693-a92b-d356598b900a'' moved to trashcan
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9d3cab8c-98e7-4693-a92b-d356598b900a, vol_name:cephfs) < ""
Dec 06 10:22:32 np0005548790.localdomain ceph-mon[301742]: pgmap v427: 177 pgs: 177 active+clean; 987 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 24 MiB/s wr, 37 op/s
Dec 06 10:22:32 np0005548790.localdomain ceph-mon[301742]: osdmap e195: 6 total, 6 up, 6 in
Dec 06 10:22:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 30 MiB/s wr, 76 op/s
Dec 06 10:22:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:33.451 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:33.453 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:33.453 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:22:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:33.454 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:33.454 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:33.458 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "04a68994-1285-4b19-bd78-8daa43192107", "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "format": "json"}]: dispatch
Dec 06 10:22:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9d3cab8c-98e7-4693-a92b-d356598b900a", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:35 np0005548790.localdomain ceph-mon[301742]: pgmap v429: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 30 MiB/s wr, 76 op/s
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 30 MiB/s wr, 76 op/s
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, vol_name:cephfs) < ""
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, vol_name:cephfs) < ""
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "04a68994-1285-4b19-bd78-8daa43192107", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:04a68994-1285-4b19-bd78-8daa43192107, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 06 10:22:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:04a68994-1285-4b19-bd78-8daa43192107, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 06 10:22:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:36 np0005548790.localdomain ceph-mon[301742]: pgmap v430: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 30 MiB/s wr, 76 op/s
Dec 06 10:22:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "04a68994-1285-4b19-bd78-8daa43192107", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1087206490' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 15 MiB/s wr, 54 op/s
Dec 06 10:22:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e196 e196: 6 total, 6 up, 6 in
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, vol_name:cephfs) < ""
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c4b473ee-dbff-47ba-b3da-5329e0795d44'' moved to trashcan
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4b473ee-dbff-47ba-b3da-5329e0795d44, vol_name:cephfs) < ""
Dec 06 10:22:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "df8d11b3-b101-4628-bf7f-13330bfcfc51", "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:df8d11b3-b101-4628-bf7f-13330bfcfc51, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 06 10:22:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:df8d11b3-b101-4628-bf7f-13330bfcfc51, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 06 10:22:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:38.459 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:38.461 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:38.461 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:22:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:38.462 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:38.502 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:38.502 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: pgmap v431: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 2.6 MiB/s rd, 15 MiB/s wr, 54 op/s
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: osdmap e196: 6 total, 6 up, 6 in
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4090728169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/749219856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/749219856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3578013769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e197 e197: 6 total, 6 up, 6 in
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 2.8 MiB/s rd, 33 MiB/s wr, 183 op/s
Dec 06 10:22:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:39.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:39.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:22:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:39.335 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:22:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:39.351 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "format": "json"}]: dispatch
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4b473ee-dbff-47ba-b3da-5329e0795d44", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "df8d11b3-b101-4628-bf7f-13330bfcfc51", "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: osdmap e197: 6 total, 6 up, 6 in
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/457341292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/457341292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/819637897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/819637897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e198 e198: 6 total, 6 up, 6 in
Dec 06 10:22:40 np0005548790.localdomain ceph-mon[301742]: pgmap v434: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 2.8 MiB/s rd, 33 MiB/s wr, 183 op/s
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v436: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 103 KiB/s rd, 21 MiB/s wr, 158 op/s
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567_04b38652-f50e-477c-8a7a-6a8616208060", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567_04b38652-f50e-477c-8a7a-6a8616208060, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp'
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp' to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta'
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567_04b38652-f50e-477c-8a7a-6a8616208060, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp'
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta.tmp' to config b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd/.meta'
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:14d68a54-7a75-45f3-abbd-01dbc7ace567, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:41 np0005548790.localdomain ceph-mon[301742]: osdmap e198: 6 total, 6 up, 6 in
Dec 06 10:22:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2929352609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2929352609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "df8d11b3-b101-4628-bf7f-13330bfcfc51", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:df8d11b3-b101-4628-bf7f-13330bfcfc51, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:df8d11b3-b101-4628-bf7f-13330bfcfc51, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:22:41
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['manila_data', 'backups', 'volumes', 'images', 'vms', 'manila_metadata', '.mgr']
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2211323013' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2211323013' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014885626046901173 of space, bias 1.0, pg target 0.29721633340312675 quantized to 32 (current 32)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0752100345477387 of space, bias 1.0, pg target 15.016936898031828 quantized to 32 (current 32)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002517288409640797 quantized to 32 (current 32)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001510373045784478 quantized to 32 (current 32)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0002611800321049693 of space, bias 4.0, pg target 0.19292498371487066 quantized to 16 (current 16)
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:22:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:42.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:42.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:22:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:22:42 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:22:42 np0005548790.localdomain podman[320157]: 2025-12-06 10:22:42.5646372 +0000 UTC m=+0.078642659 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:22:42 np0005548790.localdomain podman[320157]: 2025-12-06 10:22:42.576178493 +0000 UTC m=+0.090183932 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:22:42 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e199 e199: 6 total, 6 up, 6 in
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: pgmap v436: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 103 KiB/s rd, 21 MiB/s wr, 158 op/s
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567_04b38652-f50e-477c-8a7a-6a8616208060", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "snap_name": "14d68a54-7a75-45f3-abbd-01dbc7ace567", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "df8d11b3-b101-4628-bf7f-13330bfcfc51", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2211323013' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2211323013' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1478917471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1478917471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 195 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 290 KiB/s rd, 24 MiB/s wr, 436 op/s
Dec 06 10:22:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:43.503 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:43 np0005548790.localdomain ceph-mon[301742]: osdmap e199: 6 total, 6 up, 6 in
Dec 06 10:22:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:44.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:44 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:22:44 np0005548790.localdomain podman[320175]: 2025-12-06 10:22:44.57411743 +0000 UTC m=+0.081856107 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:22:44 np0005548790.localdomain podman[320175]: 2025-12-06 10:22:44.612837058 +0000 UTC m=+0.120575765 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:22:44 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "format": "json"}]: dispatch
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1d7917f2-7128-46ce-8479-e0741aca1efd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1d7917f2-7128-46ce-8479-e0741aca1efd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d7917f2-7128-46ce-8479-e0741aca1efd' of type subvolume
Dec 06 10:22:44 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:44.699+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d7917f2-7128-46ce-8479-e0741aca1efd' of type subvolume
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:44 np0005548790.localdomain ceph-mon[301742]: pgmap v438: 177 pgs: 177 active+clean; 195 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 290 KiB/s rd, 24 MiB/s wr, 436 op/s
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1d7917f2-7128-46ce-8479-e0741aca1efd'' moved to trashcan
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d7917f2-7128-46ce-8479-e0741aca1efd, vol_name:cephfs) < ""
Dec 06 10:22:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e200 e200: 6 total, 6 up, 6 in
Dec 06 10:22:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:45.067 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:64212ab2-3c88-4fa8-92e9-e7786f748419, vol_name:cephfs) < ""
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/64212ab2-3c88-4fa8-92e9-e7786f748419/.meta.tmp'
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/64212ab2-3c88-4fa8-92e9-e7786f748419/.meta.tmp' to config b'/volumes/_nogroup/64212ab2-3c88-4fa8-92e9-e7786f748419/.meta'
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:64212ab2-3c88-4fa8-92e9-e7786f748419, vol_name:cephfs) < ""
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "format": "json"}]: dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:64212ab2-3c88-4fa8-92e9-e7786f748419, vol_name:cephfs) < ""
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:64212ab2-3c88-4fa8-92e9-e7786f748419, vol_name:cephfs) < ""
Dec 06 10:22:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 195 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 171 KiB/s rd, 1.4 MiB/s wr, 254 op/s
Dec 06 10:22:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:45.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:45.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:22:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:22:45 np0005548790.localdomain podman[320199]: 2025-12-06 10:22:45.576291865 +0000 UTC m=+0.087105188 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec 06 10:22:45 np0005548790.localdomain podman[320199]: 2025-12-06 10:22:45.612597448 +0000 UTC m=+0.123410771 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible)
Dec 06 10:22:45 np0005548790.localdomain systemd[1]: tmp-crun.4Bw6ce.mount: Deactivated successfully.
Dec 06 10:22:45 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:22:45 np0005548790.localdomain podman[320198]: 2025-12-06 10:22:45.631372986 +0000 UTC m=+0.147070992 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Dec 06 10:22:45 np0005548790.localdomain podman[320198]: 2025-12-06 10:22:45.643021481 +0000 UTC m=+0.158719517 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:22:45 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "format": "json"}]: dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d7917f2-7128-46ce-8479-e0741aca1efd", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: osdmap e200: 6 total, 6 up, 6 in
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1317054491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1317054491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e201 e201: 6 total, 6 up, 6 in
Dec 06 10:22:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:46.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "format": "json"}]: dispatch
Dec 06 10:22:46 np0005548790.localdomain ceph-mon[301742]: pgmap v440: 177 pgs: 177 active+clean; 195 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 171 KiB/s rd, 1.4 MiB/s wr, 254 op/s
Dec 06 10:22:46 np0005548790.localdomain ceph-mon[301742]: osdmap e201: 6 total, 6 up, 6 in
Dec 06 10:22:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 195 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 171 KiB/s rd, 1.4 MiB/s wr, 254 op/s
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.358 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.359 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.359 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:22:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e202 e202: 6 total, 6 up, 6 in
Dec 06 10:22:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:22:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3787706483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:47.813 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.008 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.010 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11517MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.011 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.011 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.076 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.076 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.099 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:22:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:22:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:22:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:48.404 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:22:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:48.405 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:22:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:48.405 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:22:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:22:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:22:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:22:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18745 "" "Go-http-client/1.1"
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: pgmap v442: 177 pgs: 177 active+clean; 195 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 171 KiB/s rd, 1.4 MiB/s wr, 254 op/s
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: osdmap e202: 6 total, 6 up, 6 in
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3787706483' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/869442583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4009757146' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "format": "json"}]: dispatch
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:64212ab2-3c88-4fa8-92e9-e7786f748419, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.532 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:64212ab2-3c88-4fa8-92e9-e7786f748419, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '64212ab2-3c88-4fa8-92e9-e7786f748419' of type subvolume
Dec 06 10:22:48 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:48.537+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '64212ab2-3c88-4fa8-92e9-e7786f748419' of type subvolume
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:64212ab2-3c88-4fa8-92e9-e7786f748419, vol_name:cephfs) < ""
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/64212ab2-3c88-4fa8-92e9-e7786f748419'' moved to trashcan
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e203 e203: 6 total, 6 up, 6 in
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:64212ab2-3c88-4fa8-92e9-e7786f748419, vol_name:cephfs) < ""
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:22:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1713529106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.598 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.604 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.620 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.622 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:22:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:48.623 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 196 MiB data, 984 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 51 KiB/s wr, 84 op/s
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "format": "json"}]: dispatch
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "64212ab2-3c88-4fa8-92e9-e7786f748419", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: osdmap e203: 6 total, 6 up, 6 in
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1713529106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2088044712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:22:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e204 e204: 6 total, 6 up, 6 in
Dec 06 10:22:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:49.623 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:22:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:49.624 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:22:50 np0005548790.localdomain ceph-mon[301742]: pgmap v445: 177 pgs: 177 active+clean; 196 MiB data, 984 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 51 KiB/s wr, 84 op/s
Dec 06 10:22:50 np0005548790.localdomain ceph-mon[301742]: osdmap e204: 6 total, 6 up, 6 in
Dec 06 10:22:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e205 e205: 6 total, 6 up, 6 in
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 196 MiB data, 984 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 58 KiB/s wr, 95 op/s
Dec 06 10:22:51 np0005548790.localdomain ceph-mon[301742]: osdmap e205: 6 total, 6 up, 6 in
Dec 06 10:22:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e206 e206: 6 total, 6 up, 6 in
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, vol_name:cephfs) < ""
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea938ab5-0d5f-47e7-a093-25bbc5841b54/.meta.tmp'
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea938ab5-0d5f-47e7-a093-25bbc5841b54/.meta.tmp' to config b'/volumes/_nogroup/ea938ab5-0d5f-47e7-a093-25bbc5841b54/.meta'
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, vol_name:cephfs) < ""
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "format": "json"}]: dispatch
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, vol_name:cephfs) < ""
Dec 06 10:22:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, vol_name:cephfs) < ""
Dec 06 10:22:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:22:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e207 e207: 6 total, 6 up, 6 in
Dec 06 10:22:52 np0005548790.localdomain podman[320282]: 2025-12-06 10:22:52.566719521 +0000 UTC m=+0.080482128 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Dec 06 10:22:52 np0005548790.localdomain podman[320282]: 2025-12-06 10:22:52.603905578 +0000 UTC m=+0.117668195 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:22:52 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:22:52 np0005548790.localdomain ceph-mon[301742]: pgmap v448: 177 pgs: 177 active+clean; 196 MiB data, 984 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 58 KiB/s wr, 95 op/s
Dec 06 10:22:52 np0005548790.localdomain ceph-mon[301742]: osdmap e206: 6 total, 6 up, 6 in
Dec 06 10:22:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:22:52 np0005548790.localdomain ceph-mon[301742]: osdmap e207: 6 total, 6 up, 6 in
Dec 06 10:22:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v451: 177 pgs: 177 active+clean; 196 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 202 KiB/s rd, 56 KiB/s wr, 286 op/s
Dec 06 10:22:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:53.534 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:53.536 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:22:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:53.537 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:22:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:53.537 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:53.575 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:53.576 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:22:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:22:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:22:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:22:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "format": "json"}]: dispatch
Dec 06 10:22:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e208 e208: 6 total, 6 up, 6 in
Dec 06 10:22:54 np0005548790.localdomain ceph-mon[301742]: pgmap v451: 177 pgs: 177 active+clean; 196 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 202 KiB/s rd, 56 KiB/s wr, 286 op/s
Dec 06 10:22:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2991255880' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2991255880' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "format": "json"}]: dispatch
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea938ab5-0d5f-47e7-a093-25bbc5841b54' of type subvolume
Dec 06 10:22:55 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:22:55.214+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea938ab5-0d5f-47e7-a093-25bbc5841b54' of type subvolume
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, vol_name:cephfs) < ""
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea938ab5-0d5f-47e7-a093-25bbc5841b54'' moved to trashcan
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea938ab5-0d5f-47e7-a093-25bbc5841b54, vol_name:cephfs) < ""
Dec 06 10:22:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 196 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 48 KiB/s wr, 246 op/s
Dec 06 10:22:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:55.298 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:55.300 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:22:55 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:22:55.301 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:22:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:22:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:22:55 np0005548790.localdomain podman[320301]: 2025-12-06 10:22:55.562461696 +0000 UTC m=+0.076231884 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:22:55 np0005548790.localdomain podman[320301]: 2025-12-06 10:22:55.568207411 +0000 UTC m=+0.081976899 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:22:55 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:22:55 np0005548790.localdomain podman[320302]: 2025-12-06 10:22:55.625127172 +0000 UTC m=+0.135215141 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:22:55 np0005548790.localdomain podman[320302]: 2025-12-06 10:22:55.685825185 +0000 UTC m=+0.195913134 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 06 10:22:55 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:22:55 np0005548790.localdomain ceph-mon[301742]: osdmap e208: 6 total, 6 up, 6 in
Dec 06 10:22:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3220423124' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3220423124' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1020764155' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1020764155' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:56.382 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3555310940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3555310940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "format": "json"}]: dispatch
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea938ab5-0d5f-47e7-a093-25bbc5841b54", "force": true, "format": "json"}]: dispatch
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: pgmap v453: 177 pgs: 177 active+clean; 196 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 48 KiB/s wr, 246 op/s
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3555310940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3555310940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 196 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 135 KiB/s rd, 37 KiB/s wr, 190 op/s
Dec 06 10:22:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e209 e209: 6 total, 6 up, 6 in
Dec 06 10:22:58 np0005548790.localdomain ceph-mon[301742]: pgmap v454: 177 pgs: 177 active+clean; 196 MiB data, 969 MiB used, 41 GiB / 42 GiB avail; 135 KiB/s rd, 37 KiB/s wr, 190 op/s
Dec 06 10:22:58 np0005548790.localdomain ceph-mon[301742]: osdmap e209: 6 total, 6 up, 6 in
Dec 06 10:22:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2409640309' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:22:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2409640309' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:22:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:22:58.629 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:22:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:22:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v456: 177 pgs: 177 active+clean; 196 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 216 KiB/s rd, 56 KiB/s wr, 301 op/s
Dec 06 10:23:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:00.235 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:00 np0005548790.localdomain ceph-mon[301742]: pgmap v456: 177 pgs: 177 active+clean; 196 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 216 KiB/s rd, 56 KiB/s wr, 301 op/s
Dec 06 10:23:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 196 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 19 KiB/s wr, 110 op/s
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf11e39e-6643-46ed-983d-322b7205a5ae, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf11e39e-6643-46ed-983d-322b7205a5ae/.meta.tmp'
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf11e39e-6643-46ed-983d-322b7205a5ae/.meta.tmp' to config b'/volumes/_nogroup/bf11e39e-6643-46ed-983d-322b7205a5ae/.meta'
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf11e39e-6643-46ed-983d-322b7205a5ae, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "format": "json"}]: dispatch
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf11e39e-6643-46ed-983d-322b7205a5ae, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf11e39e-6643-46ed-983d-322b7205a5ae, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e210 e210: 6 total, 6 up, 6 in
Dec 06 10:23:02 np0005548790.localdomain ceph-mon[301742]: pgmap v457: 177 pgs: 177 active+clean; 196 MiB data, 953 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 19 KiB/s wr, 110 op/s
Dec 06 10:23:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:02 np0005548790.localdomain ceph-mon[301742]: osdmap e210: 6 total, 6 up, 6 in
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp'
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp' to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta'
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "format": "json"}]: dispatch
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v459: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 34 KiB/s wr, 118 op/s
Dec 06 10:23:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e211 e211: 6 total, 6 up, 6 in
Dec 06 10:23:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "format": "json"}]: dispatch
Dec 06 10:23:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "format": "json"}]: dispatch
Dec 06 10:23:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:03.632 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:04 np0005548790.localdomain ceph-mon[301742]: pgmap v459: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 34 KiB/s wr, 118 op/s
Dec 06 10:23:04 np0005548790.localdomain ceph-mon[301742]: osdmap e211: 6 total, 6 up, 6 in
Dec 06 10:23:04 np0005548790.localdomain sudo[320351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:23:04 np0005548790.localdomain sudo[320351]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:23:04 np0005548790.localdomain sudo[320351]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:04 np0005548790.localdomain sudo[320369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:23:04 np0005548790.localdomain sudo[320369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:23:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v461: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 35 KiB/s wr, 123 op/s
Dec 06 10:23:05 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:05.303 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:23:05 np0005548790.localdomain sudo[320369]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:23:05 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev a7d9cb6a-52c9-4664-8c72-ad002d134353 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:23:05 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev a7d9cb6a-52c9-4664-8c72-ad002d134353 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:23:05 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event a7d9cb6a-52c9-4664-8c72-ad002d134353 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e212 e212: 6 total, 6 up, 6 in
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:23:05 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:23:05 np0005548790.localdomain sudo[320420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:23:05 np0005548790.localdomain sudo[320420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:23:05 np0005548790.localdomain sudo[320420]: pam_unix(sudo:session): session closed for user root
Dec 06 10:23:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "4ddcde0f-43b5-4b51-a615-3fe828471f71", "format": "json"}]: dispatch
Dec 06 10:23:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4ddcde0f-43b5-4b51-a615-3fe828471f71, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4ddcde0f-43b5-4b51-a615-3fe828471f71, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:06 np0005548790.localdomain ceph-mon[301742]: pgmap v461: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 35 KiB/s wr, 123 op/s
Dec 06 10:23:06 np0005548790.localdomain ceph-mon[301742]: osdmap e212: 6 total, 6 up, 6 in
Dec 06 10:23:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e213 e213: 6 total, 6 up, 6 in
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "format": "json"}]: dispatch
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf11e39e-6643-46ed-983d-322b7205a5ae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf11e39e-6643-46ed-983d-322b7205a5ae, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:23:06 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:23:06.984+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf11e39e-6643-46ed-983d-322b7205a5ae' of type subvolume
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf11e39e-6643-46ed-983d-322b7205a5ae' of type subvolume
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf11e39e-6643-46ed-983d-322b7205a5ae, vol_name:cephfs) < ""
Dec 06 10:23:06 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf11e39e-6643-46ed-983d-322b7205a5ae'' moved to trashcan
Dec 06 10:23:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:23:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf11e39e-6643-46ed-983d-322b7205a5ae, vol_name:cephfs) < ""
Dec 06 10:23:07 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:23:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:23:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 6.3 KiB/s rd, 25 KiB/s wr, 14 op/s
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.326 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:23:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:23:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "4ddcde0f-43b5-4b51-a615-3fe828471f71", "format": "json"}]: dispatch
Dec 06 10:23:07 np0005548790.localdomain ceph-mon[301742]: osdmap e213: 6 total, 6 up, 6 in
Dec 06 10:23:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:23:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e214 e214: 6 total, 6 up, 6 in
Dec 06 10:23:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:08.637 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:08.639 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:08.639 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:08.639 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:08.653 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:08.653 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "format": "json"}]: dispatch
Dec 06 10:23:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf11e39e-6643-46ed-983d-322b7205a5ae", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:08 np0005548790.localdomain ceph-mon[301742]: pgmap v464: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 6.3 KiB/s rd, 25 KiB/s wr, 14 op/s
Dec 06 10:23:08 np0005548790.localdomain ceph-mon[301742]: osdmap e214: 6 total, 6 up, 6 in
Dec 06 10:23:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1802975927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1802975927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 21 KiB/s wr, 72 op/s
Dec 06 10:23:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "2106c631-2fac-4a48-9eb5-105ae638e038", "format": "json"}]: dispatch
Dec 06 10:23:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2106c631-2fac-4a48-9eb5-105ae638e038, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2106c631-2fac-4a48-9eb5-105ae638e038, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:10.331 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:10 np0005548790.localdomain ceph-mon[301742]: pgmap v466: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 21 KiB/s wr, 72 op/s
Dec 06 10:23:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "2106c631-2fac-4a48-9eb5-105ae638e038", "format": "json"}]: dispatch
Dec 06 10:23:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v467: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 20 KiB/s wr, 68 op/s
Dec 06 10:23:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e215 e215: 6 total, 6 up, 6 in
Dec 06 10:23:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e216 e216: 6 total, 6 up, 6 in
Dec 06 10:23:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:23:12 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:23:12.686 262327 INFO neutron.agent.linux.ip_lib [None req-0512310b-b210-48fb-a15b-c681acca33c4 - - - - - -] Device tapbe97762f-7f cannot be used as it has no MAC address
Dec 06 10:23:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:12.712 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 np0005548790.localdomain kernel: device tapbe97762f-7f entered promiscuous mode
Dec 06 10:23:12 np0005548790.localdomain podman[320441]: 2025-12-06 10:23:12.722996321 +0000 UTC m=+0.093938474 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:23:12 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016592.7235] manager: (tapbe97762f-7f): new Generic device (/org/freedesktop/NetworkManager/Devices/50)
Dec 06 10:23:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:23:12Z|00250|binding|INFO|Claiming lport be97762f-7fc5-4dbe-9eba-6ab80559ad8a for this chassis.
Dec 06 10:23:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:23:12Z|00251|binding|INFO|be97762f-7fc5-4dbe-9eba-6ab80559ad8a: Claiming unknown
Dec 06 10:23:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:12.726 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 np0005548790.localdomain systemd-udevd[320466]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:23:12 np0005548790.localdomain ceph-mon[301742]: pgmap v467: 177 pgs: 177 active+clean; 196 MiB data, 954 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 20 KiB/s wr, 68 op/s
Dec 06 10:23:12 np0005548790.localdomain ceph-mon[301742]: osdmap e215: 6 total, 6 up, 6 in
Dec 06 10:23:12 np0005548790.localdomain ceph-mon[301742]: osdmap e216: 6 total, 6 up, 6 in
Dec 06 10:23:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:12.734 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a3d4440b-134c-42ab-a512-ee72a4c55dbf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3d4440b-134c-42ab-a512-ee72a4c55dbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8321907d0a38406c8e7c51f32ab796ad', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51072df1-b4ef-4272-a07f-9e27da17d0a7, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=be97762f-7fc5-4dbe-9eba-6ab80559ad8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:23:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:12.736 159200 INFO neutron.agent.ovn.metadata.agent [-] Port be97762f-7fc5-4dbe-9eba-6ab80559ad8a in datapath a3d4440b-134c-42ab-a512-ee72a4c55dbf bound to our chassis
Dec 06 10:23:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:12.737 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a3d4440b-134c-42ab-a512-ee72a4c55dbf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:23:12 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:12.738 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[91737b4b-ced9-40f8-adda-c83a8216d39a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain podman[320441]: 2025-12-06 10:23:12.759305034 +0000 UTC m=+0.130247157 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:23:12Z|00252|binding|INFO|Setting lport be97762f-7fc5-4dbe-9eba-6ab80559ad8a ovn-installed in OVS
Dec 06 10:23:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:23:12Z|00253|binding|INFO|Setting lport be97762f-7fc5-4dbe-9eba-6ab80559ad8a up in Southbound
Dec 06 10:23:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:12.762 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:12.763 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapbe97762f-7f: No such device
Dec 06 10:23:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:12.796 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:12.822 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "2106c631-2fac-4a48-9eb5-105ae638e038_339c2891-c9dd-4dd5-bc06-64f8eef95887", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2106c631-2fac-4a48-9eb5-105ae638e038_339c2891-c9dd-4dd5-bc06-64f8eef95887, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp'
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp' to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta'
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2106c631-2fac-4a48-9eb5-105ae638e038_339c2891-c9dd-4dd5-bc06-64f8eef95887, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "2106c631-2fac-4a48-9eb5-105ae638e038", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2106c631-2fac-4a48-9eb5-105ae638e038, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp'
Dec 06 10:23:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp' to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta'
Dec 06 10:23:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2106c631-2fac-4a48-9eb5-105ae638e038, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 196 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 44 KiB/s wr, 108 op/s
Dec 06 10:23:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:13.682 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:13 np0005548790.localdomain podman[320539]: 2025-12-06 10:23:13.764061198 +0000 UTC m=+0.111081447 container create 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 06 10:23:13 np0005548790.localdomain systemd[1]: Started libpod-conmon-5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a.scope.
Dec 06 10:23:13 np0005548790.localdomain podman[320539]: 2025-12-06 10:23:13.722409881 +0000 UTC m=+0.069430180 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:23:13 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:23:13 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd234320d16cdb39cb97342081f9aeafc3cac69c79df3f8518c1b61d71f7ef96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:23:13 np0005548790.localdomain podman[320539]: 2025-12-06 10:23:13.848274388 +0000 UTC m=+0.195294667 container init 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 06 10:23:13 np0005548790.localdomain podman[320539]: 2025-12-06 10:23:13.857146238 +0000 UTC m=+0.204166477 container start 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:23:13 np0005548790.localdomain dnsmasq[320558]: started, version 2.85 cachesize 150
Dec 06 10:23:13 np0005548790.localdomain dnsmasq[320558]: DNS service limited to local subnets
Dec 06 10:23:13 np0005548790.localdomain dnsmasq[320558]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:23:13 np0005548790.localdomain dnsmasq[320558]: warning: no upstream servers configured
Dec 06 10:23:13 np0005548790.localdomain dnsmasq-dhcp[320558]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:23:13 np0005548790.localdomain dnsmasq[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/addn_hosts - 0 addresses
Dec 06 10:23:13 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/host
Dec 06 10:23:13 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/opts
Dec 06 10:23:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:14 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:23:14.319 262327 INFO neutron.agent.dhcp.agent [None req-2ca959a5-8cea-4ad4-b9fa-60b65a7abcdf - - - - - -] DHCP configuration for ports {'e6e53e74-afa7-4ca2-bcfe-7d2475f00d62'} is completed
Dec 06 10:23:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "2106c631-2fac-4a48-9eb5-105ae638e038_339c2891-c9dd-4dd5-bc06-64f8eef95887", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "2106c631-2fac-4a48-9eb5-105ae638e038", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:14 np0005548790.localdomain ceph-mon[301742]: pgmap v470: 177 pgs: 177 active+clean; 196 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 44 KiB/s wr, 108 op/s
Dec 06 10:23:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v471: 177 pgs: 177 active+clean; 196 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 35 KiB/s wr, 86 op/s
Dec 06 10:23:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:15.418 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:15 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:23:15 np0005548790.localdomain podman[320559]: 2025-12-06 10:23:15.573915565 +0000 UTC m=+0.081521418 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:23:15 np0005548790.localdomain podman[320559]: 2025-12-06 10:23:15.586257279 +0000 UTC m=+0.093863132 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:23:15 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:23:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:23:16.006 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:23:15Z, description=, device_id=2b5489f4-b8c8-4de2-9a14-bc27f555fe1d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85620e20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c85620d90>], id=cd3e0a94-36ae-4f84-9b38-e15db44677fe, ip_allocation=immediate, mac_address=fa:16:3e:18:8b:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:23:11Z, description=, dns_domain=, id=a3d4440b-134c-42ab-a512-ee72a4c55dbf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesSnapshotTestJSON-815985565-network, port_security_enabled=True, project_id=8321907d0a38406c8e7c51f32ab796ad, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=11089, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3372, status=ACTIVE, subnets=['e063a6aa-6ff5-406a-bd2b-f47dca99cefc'], tags=[], tenant_id=8321907d0a38406c8e7c51f32ab796ad, updated_at=2025-12-06T10:23:12Z, vlan_transparent=None, network_id=a3d4440b-134c-42ab-a512-ee72a4c55dbf, port_security_enabled=False, project_id=8321907d0a38406c8e7c51f32ab796ad, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3390, status=DOWN, tags=[], tenant_id=8321907d0a38406c8e7c51f32ab796ad, updated_at=2025-12-06T10:23:15Z on network a3d4440b-134c-42ab-a512-ee72a4c55dbf
Dec 06 10:23:16 np0005548790.localdomain dnsmasq[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/addn_hosts - 1 addresses
Dec 06 10:23:16 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/host
Dec 06 10:23:16 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/opts
Dec 06 10:23:16 np0005548790.localdomain podman[320599]: 2025-12-06 10:23:16.236232112 +0000 UTC m=+0.058888405 container kill 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "4ddcde0f-43b5-4b51-a615-3fe828471f71_bd1921c7-7a11-44e6-a718-d94ea8eab798", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4ddcde0f-43b5-4b51-a615-3fe828471f71_bd1921c7-7a11-44e6-a718-d94ea8eab798, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:23:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp'
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp' to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta'
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4ddcde0f-43b5-4b51-a615-3fe828471f71_bd1921c7-7a11-44e6-a718-d94ea8eab798, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "4ddcde0f-43b5-4b51-a615-3fe828471f71", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4ddcde0f-43b5-4b51-a615-3fe828471f71, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp'
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta.tmp' to config b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052/.meta'
Dec 06 10:23:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4ddcde0f-43b5-4b51-a615-3fe828471f71, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:16 np0005548790.localdomain podman[320614]: 2025-12-06 10:23:16.414108426 +0000 UTC m=+0.152153939 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Dec 06 10:23:16 np0005548790.localdomain podman[320613]: 2025-12-06 10:23:16.377129525 +0000 UTC m=+0.118919369 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec 06 10:23:16 np0005548790.localdomain podman[320614]: 2025-12-06 10:23:16.434146508 +0000 UTC m=+0.172192001 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Dec 06 10:23:16 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:23:16 np0005548790.localdomain podman[320613]: 2025-12-06 10:23:16.457200223 +0000 UTC m=+0.198990027 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:23:16 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:23:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:23:16.493 262327 INFO neutron.agent.dhcp.agent [None req-ba9a412a-fc17-4369-aba4-dcaa70da41bd - - - - - -] DHCP configuration for ports {'cd3e0a94-36ae-4f84-9b38-e15db44677fe'} is completed
Dec 06 10:23:16 np0005548790.localdomain ceph-mon[301742]: pgmap v471: 177 pgs: 177 active+clean; 196 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 35 KiB/s wr, 86 op/s
Dec 06 10:23:16 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:23:16.948 262327 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-06T10:23:15Z, description=, device_id=2b5489f4-b8c8-4de2-9a14-bc27f555fe1d, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c859ae520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f1c859ae940>], id=cd3e0a94-36ae-4f84-9b38-e15db44677fe, ip_allocation=immediate, mac_address=fa:16:3e:18:8b:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-06T10:23:11Z, description=, dns_domain=, id=a3d4440b-134c-42ab-a512-ee72a4c55dbf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesSnapshotTestJSON-815985565-network, port_security_enabled=True, project_id=8321907d0a38406c8e7c51f32ab796ad, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=11089, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3372, status=ACTIVE, subnets=['e063a6aa-6ff5-406a-bd2b-f47dca99cefc'], tags=[], tenant_id=8321907d0a38406c8e7c51f32ab796ad, updated_at=2025-12-06T10:23:12Z, vlan_transparent=None, network_id=a3d4440b-134c-42ab-a512-ee72a4c55dbf, port_security_enabled=False, project_id=8321907d0a38406c8e7c51f32ab796ad, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3390, status=DOWN, tags=[], tenant_id=8321907d0a38406c8e7c51f32ab796ad, updated_at=2025-12-06T10:23:15Z on network a3d4440b-134c-42ab-a512-ee72a4c55dbf
Dec 06 10:23:17 np0005548790.localdomain dnsmasq[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/addn_hosts - 1 addresses
Dec 06 10:23:17 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/host
Dec 06 10:23:17 np0005548790.localdomain podman[320677]: 2025-12-06 10:23:17.163102119 +0000 UTC m=+0.061955699 container kill 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 06 10:23:17 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/opts
Dec 06 10:23:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 196 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 18 KiB/s wr, 30 op/s
Dec 06 10:23:17 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:23:17.447 262327 INFO neutron.agent.dhcp.agent [None req-de1aff05-8dcb-47c2-9de7-a3825b95d3bb - - - - - -] DHCP configuration for ports {'cd3e0a94-36ae-4f84-9b38-e15db44677fe'} is completed
Dec 06 10:23:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "4ddcde0f-43b5-4b51-a615-3fe828471f71_bd1921c7-7a11-44e6-a718-d94ea8eab798", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "snap_name": "4ddcde0f-43b5-4b51-a615-3fe828471f71", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:23:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:23:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:23:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:23:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:23:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19216 "" "Go-http-client/1.1"
Dec 06 10:23:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:18.713 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:18 np0005548790.localdomain ceph-mon[301742]: pgmap v472: 177 pgs: 177 active+clean; 196 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 18 KiB/s wr, 30 op/s
Dec 06 10:23:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 41 KiB/s wr, 75 op/s
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "format": "json"}]: dispatch
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e300c2ad-ae7a-425b-b81e-235b17341052, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e300c2ad-ae7a-425b-b81e-235b17341052, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:23:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:23:19.543+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e300c2ad-ae7a-425b-b81e-235b17341052' of type subvolume
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e300c2ad-ae7a-425b-b81e-235b17341052' of type subvolume
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e300c2ad-ae7a-425b-b81e-235b17341052'' moved to trashcan
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:23:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e300c2ad-ae7a-425b-b81e-235b17341052, vol_name:cephfs) < ""
Dec 06 10:23:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e217 e217: 6 total, 6 up, 6 in
Dec 06 10:23:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e218 e218: 6 total, 6 up, 6 in
Dec 06 10:23:20 np0005548790.localdomain ceph-mon[301742]: pgmap v473: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 41 KiB/s wr, 75 op/s
Dec 06 10:23:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "format": "json"}]: dispatch
Dec 06 10:23:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e300c2ad-ae7a-425b-b81e-235b17341052", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:20 np0005548790.localdomain ceph-mon[301742]: osdmap e217: 6 total, 6 up, 6 in
Dec 06 10:23:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 23 KiB/s wr, 45 op/s
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1707344585' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1707344585' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: osdmap e218: 6 total, 6 up, 6 in
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: pgmap v476: 177 pgs: 177 active+clean; 196 MiB data, 981 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 23 KiB/s wr, 45 op/s
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1707344585' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1707344585' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:23:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4034920819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:22 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4034920819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 261 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Dec 06 10:23:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:23:23 np0005548790.localdomain podman[320697]: 2025-12-06 10:23:23.568598252 +0000 UTC m=+0.080982563 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:23:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:23:23 np0005548790.localdomain podman[320697]: 2025-12-06 10:23:23.609180711 +0000 UTC m=+0.121564962 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:23:23 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:23:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:23.716 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:23.718 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:23.719 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:23.719 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:23.720 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:23.723 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:23 np0005548790.localdomain ceph-mon[301742]: pgmap v477: 177 pgs: 177 active+clean; 261 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Dec 06 10:23:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "format": "json"}]: dispatch
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "format": "json"}]: dispatch
Dec 06 10:23:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v478: 177 pgs: 177 active+clean; 261 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Dec 06 10:23:25 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4146455680' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:25 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4146455680' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:25 np0005548790.localdomain ceph-mon[301742]: pgmap v478: 177 pgs: 177 active+clean; 261 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 8.0 MiB/s wr, 153 op/s
Dec 06 10:23:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:23:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:23:26 np0005548790.localdomain podman[320716]: 2025-12-06 10:23:26.585337834 +0000 UTC m=+0.092691530 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:23:26 np0005548790.localdomain systemd[1]: tmp-crun.fycwOo.mount: Deactivated successfully.
Dec 06 10:23:26 np0005548790.localdomain podman[320717]: 2025-12-06 10:23:26.669078231 +0000 UTC m=+0.158525653 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:23:26 np0005548790.localdomain podman[320717]: 2025-12-06 10:23:26.702756752 +0000 UTC m=+0.192204204 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:23:26 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:23:26 np0005548790.localdomain podman[320716]: 2025-12-06 10:23:26.724242534 +0000 UTC m=+0.231596210 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:23:26 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:23:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 261 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 8.0 MiB/s wr, 108 op/s
Dec 06 10:23:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e219 e219: 6 total, 6 up, 6 in
Dec 06 10:23:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "f95d5566-befb-4ef3-874d-5a8d6c4a6df5", "format": "json"}]: dispatch
Dec 06 10:23:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f95d5566-befb-4ef3-874d-5a8d6c4a6df5, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f95d5566-befb-4ef3-874d-5a8d6c4a6df5, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:28 np0005548790.localdomain ceph-mon[301742]: pgmap v479: 177 pgs: 177 active+clean; 261 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 8.0 MiB/s wr, 108 op/s
Dec 06 10:23:28 np0005548790.localdomain ceph-mon[301742]: osdmap e219: 6 total, 6 up, 6 in
Dec 06 10:23:28 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "f95d5566-befb-4ef3-874d-5a8d6c4a6df5", "format": "json"}]: dispatch
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:28.724 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:28.726 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:28.726 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:28.727 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:28.815 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:28.816 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta.tmp'
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta.tmp' to config b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta'
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "format": "json"}]: dispatch
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v481: 177 pgs: 177 active+clean; 641 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 137 KiB/s rd, 53 MiB/s wr, 216 op/s
Dec 06 10:23:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:23:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "format": "json"}]: dispatch
Dec 06 10:23:30 np0005548790.localdomain ceph-mon[301742]: pgmap v481: 177 pgs: 177 active+clean; 641 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 137 KiB/s rd, 53 MiB/s wr, 216 op/s
Dec 06 10:23:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "82ca32b7-c603-4087-ab30-5f8e76d96b29", "format": "json"}]: dispatch
Dec 06 10:23:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82ca32b7-c603-4087-ab30-5f8e76d96b29, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:82ca32b7-c603-4087-ab30-5f8e76d96b29, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 641 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 116 KiB/s rd, 44 MiB/s wr, 182 op/s
Dec 06 10:23:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "snap_name": "1386d79b-64b7-4747-8e0b-aeb5f89fda0b", "format": "json"}]: dispatch
Dec 06 10:23:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1386d79b-64b7-4747-8e0b-aeb5f89fda0b, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1386d79b-64b7-4747-8e0b-aeb5f89fda0b, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "82ca32b7-c603-4087-ab30-5f8e76d96b29", "format": "json"}]: dispatch
Dec 06 10:23:32 np0005548790.localdomain ceph-mon[301742]: pgmap v482: 177 pgs: 177 active+clean; 641 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 116 KiB/s rd, 44 MiB/s wr, 182 op/s
Dec 06 10:23:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v483: 177 pgs: 177 active+clean; 989 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 104 KiB/s rd, 73 MiB/s wr, 183 op/s
Dec 06 10:23:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "snap_name": "1386d79b-64b7-4747-8e0b-aeb5f89fda0b", "format": "json"}]: dispatch
Dec 06 10:23:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:33.817 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:33.819 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:33.819 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:33.819 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:33.856 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:33.857 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:34 np0005548790.localdomain ceph-mon[301742]: pgmap v483: 177 pgs: 177 active+clean; 989 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 104 KiB/s rd, 73 MiB/s wr, 183 op/s
Dec 06 10:23:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "82ca32b7-c603-4087-ab30-5f8e76d96b29_d91f5f9e-cb0d-49f0-967d-dd0f7dcc891c", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82ca32b7-c603-4087-ab30-5f8e76d96b29_d91f5f9e-cb0d-49f0-967d-dd0f7dcc891c, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82ca32b7-c603-4087-ab30-5f8e76d96b29_d91f5f9e-cb0d-49f0-967d-dd0f7dcc891c, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "82ca32b7-c603-4087-ab30-5f8e76d96b29", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82ca32b7-c603-4087-ab30-5f8e76d96b29, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 989 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 104 KiB/s rd, 73 MiB/s wr, 183 op/s
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:82ca32b7-c603-4087-ab30-5f8e76d96b29, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "snap_name": "1386d79b-64b7-4747-8e0b-aeb5f89fda0b_b3a51089-88de-4e95-b3d9-848c07c2978e", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1386d79b-64b7-4747-8e0b-aeb5f89fda0b_b3a51089-88de-4e95-b3d9-848c07c2978e, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta.tmp'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta.tmp' to config b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1386d79b-64b7-4747-8e0b-aeb5f89fda0b_b3a51089-88de-4e95-b3d9-848c07c2978e, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "snap_name": "1386d79b-64b7-4747-8e0b-aeb5f89fda0b", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1386d79b-64b7-4747-8e0b-aeb5f89fda0b, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta.tmp'
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta.tmp' to config b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12/.meta'
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/685384975' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/685384975' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "82ca32b7-c603-4087-ab30-5f8e76d96b29_d91f5f9e-cb0d-49f0-967d-dd0f7dcc891c", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "82ca32b7-c603-4087-ab30-5f8e76d96b29", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: pgmap v484: 177 pgs: 177 active+clean; 989 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 104 KiB/s rd, 73 MiB/s wr, 183 op/s
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "snap_name": "1386d79b-64b7-4747-8e0b-aeb5f89fda0b_b3a51089-88de-4e95-b3d9-848c07c2978e", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "snap_name": "1386d79b-64b7-4747-8e0b-aeb5f89fda0b", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1386d79b-64b7-4747-8e0b-aeb5f89fda0b, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:23:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1133027623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:23:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1133027623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1133027623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:37 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1133027623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 989 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 104 KiB/s rd, 73 MiB/s wr, 183 op/s
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5aa4551f-daf4-40ec-a318-c443e92f70d8", "format": "json"}]: dispatch
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5aa4551f-daf4-40ec-a318-c443e92f70d8, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5aa4551f-daf4-40ec-a318-c443e92f70d8, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:38 np0005548790.localdomain ceph-mon[301742]: pgmap v485: 177 pgs: 177 active+clean; 989 MiB data, 3.2 GiB used, 39 GiB / 42 GiB avail; 104 KiB/s rd, 73 MiB/s wr, 183 op/s
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "format": "json"}]: dispatch
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:23:38 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:23:38.834+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '265f7762-8f3c-4f93-a2e8-4d31ff999c12' of type subvolume
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '265f7762-8f3c-4f93-a2e8-4d31ff999c12' of type subvolume
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/265f7762-8f3c-4f93-a2e8-4d31ff999c12'' moved to trashcan
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:23:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:265f7762-8f3c-4f93-a2e8-4d31ff999c12, vol_name:cephfs) < ""
Dec 06 10:23:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:38.858 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:38.859 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:38.859 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:38.860 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:38.903 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:38.904 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 337 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 155 KiB/s rd, 82 MiB/s wr, 276 op/s
Dec 06 10:23:39 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5aa4551f-daf4-40ec-a318-c443e92f70d8", "format": "json"}]: dispatch
Dec 06 10:23:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3601030272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3601030272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e220 e220: 6 total, 6 up, 6 in
Dec 06 10:23:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:40.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:40.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:23:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:40.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:23:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:40.348 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:23:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "format": "json"}]: dispatch
Dec 06 10:23:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "265f7762-8f3c-4f93-a2e8-4d31ff999c12", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:40 np0005548790.localdomain ceph-mon[301742]: pgmap v486: 177 pgs: 177 active+clean; 337 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 155 KiB/s rd, 82 MiB/s wr, 276 op/s
Dec 06 10:23:40 np0005548790.localdomain ceph-mon[301742]: osdmap e220: 6 total, 6 up, 6 in
Dec 06 10:23:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/860284658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e221 e221: 6 total, 6 up, 6 in
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v489: 177 pgs: 177 active+clean; 337 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 96 KiB/s rd, 29 MiB/s wr, 175 op/s
Dec 06 10:23:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:41.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:41.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:23:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e222 e222: 6 total, 6 up, 6 in
Dec 06 10:23:41 np0005548790.localdomain ceph-mon[301742]: osdmap e221: 6 total, 6 up, 6 in
Dec 06 10:23:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2359987376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:23:41
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['manila_metadata', 'backups', 'vms', 'images', 'manila_data', '.mgr', 'volumes']
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.008282969389772939 of space, bias 1.0, pg target 1.6483109085648149 quantized to 32 (current 32)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002694498243532477 quantized to 32 (current 32)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00036995960706309326 of space, bias 4.0, pg target 0.29251472931788575 quantized to 16 (current 16)
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5aa4551f-daf4-40ec-a318-c443e92f70d8_a730a057-9533-4839-a43f-0f73eb78bbd2", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aa4551f-daf4-40ec-a318-c443e92f70d8_a730a057-9533-4839-a43f-0f73eb78bbd2, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0645e486a0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0618c2ca00>)]
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aa4551f-daf4-40ec-a318-c443e92f70d8_a730a057-9533-4839-a43f-0f73eb78bbd2, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5aa4551f-daf4-40ec-a318-c443e92f70d8", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aa4551f-daf4-40ec-a318-c443e92f70d8, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5aa4551f-daf4-40ec-a318-c443e92f70d8, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:42 np0005548790.localdomain ceph-mon[301742]: pgmap v489: 177 pgs: 177 active+clean; 337 MiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 96 KiB/s rd, 29 MiB/s wr, 175 op/s
Dec 06 10:23:42 np0005548790.localdomain ceph-mon[301742]: osdmap e222: 6 total, 6 up, 6 in
Dec 06 10:23:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 154 KiB/s rd, 39 MiB/s wr, 281 op/s
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.348 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:23:43 np0005548790.localdomain podman[320762]: 2025-12-06 10:23:43.569657601 +0000 UTC m=+0.082125804 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent)
Dec 06 10:23:43 np0005548790.localdomain podman[320762]: 2025-12-06 10:23:43.579127787 +0000 UTC m=+0.091595990 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec 06 10:23:43 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:23:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e223 e223: 6 total, 6 up, 6 in
Dec 06 10:23:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5aa4551f-daf4-40ec-a318-c443e92f70d8_a730a057-9533-4839-a43f-0f73eb78bbd2", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5aa4551f-daf4-40ec-a318-c443e92f70d8", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.905 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.907 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.907 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.907 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.946 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:43.946 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:44.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:44 np0005548790.localdomain ceph-mon[301742]: pgmap v491: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 154 KiB/s rd, 39 MiB/s wr, 281 op/s
Dec 06 10:23:44 np0005548790.localdomain ceph-mon[301742]: osdmap e223: 6 total, 6 up, 6 in
Dec 06 10:23:44 np0005548790.localdomain ceph-mon[301742]: mgrmap e52: np0005548790.kvkfyr(active, since 12m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:23:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 64 KiB/s wr, 49 op/s
Dec 06 10:23:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:45.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:45.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5755ebf1-140d-4767-bf58-cf9be5b35f79", "format": "json"}]: dispatch
Dec 06 10:23:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5755ebf1-140d-4767-bf58-cf9be5b35f79, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:46 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5755ebf1-140d-4767-bf58-cf9be5b35f79, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:23:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:23:46 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:23:46 np0005548790.localdomain podman[320781]: 2025-12-06 10:23:46.575361214 +0000 UTC m=+0.086019399 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm)
Dec 06 10:23:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:46.580 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:46 np0005548790.localdomain podman[320781]: 2025-12-06 10:23:46.593095884 +0000 UTC m=+0.103754059 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec 06 10:23:46 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:23:46 np0005548790.localdomain podman[320780]: 2025-12-06 10:23:46.677110888 +0000 UTC m=+0.190445666 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:23:46 np0005548790.localdomain podman[320780]: 2025-12-06 10:23:46.688131326 +0000 UTC m=+0.201466064 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:23:46 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:23:46 np0005548790.localdomain podman[320782]: 2025-12-06 10:23:46.73184574 +0000 UTC m=+0.234536359 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:23:46 np0005548790.localdomain podman[320782]: 2025-12-06 10:23:46.766818396 +0000 UTC m=+0.269508965 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 06 10:23:46 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:23:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e224 e224: 6 total, 6 up, 6 in
Dec 06 10:23:47 np0005548790.localdomain ceph-mon[301742]: pgmap v493: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 64 KiB/s wr, 49 op/s
Dec 06 10:23:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5755ebf1-140d-4767-bf58-cf9be5b35f79", "format": "json"}]: dispatch
Dec 06 10:23:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 60 KiB/s wr, 47 op/s
Dec 06 10:23:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:47.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e225 e225: 6 total, 6 up, 6 in
Dec 06 10:23:48 np0005548790.localdomain ceph-mon[301742]: osdmap e224: 6 total, 6 up, 6 in
Dec 06 10:23:48 np0005548790.localdomain ceph-mon[301742]: pgmap v495: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 60 KiB/s wr, 47 op/s
Dec 06 10:23:48 np0005548790.localdomain ceph-mon[301742]: osdmap e225: 6 total, 6 up, 6 in
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.362 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.362 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.363 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.363 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.363 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:23:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:23:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:23:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:48.405 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:23:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:48.405 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:23:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:48.406 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:23:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:23:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156742 "" "Go-http-client/1.1"
Dec 06 10:23:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:23:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19226 "" "Go-http-client/1.1"
Dec 06 10:23:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:23:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2848039413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.849 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.979 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:48.996 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.105 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.106 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11519MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.107 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.107 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:23:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2848039413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5755ebf1-140d-4767-bf58-cf9be5b35f79_20013db0-3906-4db4-b01b-002cf0a0c6bc", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5755ebf1-140d-4767-bf58-cf9be5b35f79_20013db0-3906-4db4-b01b-002cf0a0c6bc, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5755ebf1-140d-4767-bf58-cf9be5b35f79_20013db0-3906-4db4-b01b-002cf0a0c6bc, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5755ebf1-140d-4767-bf58-cf9be5b35f79", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5755ebf1-140d-4767-bf58-cf9be5b35f79, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5755ebf1-140d-4767-bf58-cf9be5b35f79, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v497: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 35 op/s
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.402 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.402 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.484 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:23:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:49.655 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.655 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:49 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:49.658 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:23:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:23:49 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1887499854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.950 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.957 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.972 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.975 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:23:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:49.975 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:23:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5755ebf1-140d-4767-bf58-cf9be5b35f79_20013db0-3906-4db4-b01b-002cf0a0c6bc", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "5755ebf1-140d-4767-bf58-cf9be5b35f79", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:50 np0005548790.localdomain ceph-mon[301742]: pgmap v497: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 35 op/s
Dec 06 10:23:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3773085003' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3773085003' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:50 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1887499854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:50.976 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:50.977 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:50.978 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:23:51 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/585236573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e226 e226: 6 total, 6 up, 6 in
Dec 06 10:23:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v499: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 35 op/s
Dec 06 10:23:52 np0005548790.localdomain ceph-mon[301742]: osdmap e226: 6 total, 6 up, 6 in
Dec 06 10:23:52 np0005548790.localdomain ceph-mon[301742]: pgmap v499: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 21 KiB/s wr, 35 op/s
Dec 06 10:23:52 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2689749222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:23:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e227 e227: 6 total, 6 up, 6 in
Dec 06 10:23:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "07253e42-ed74-4565-a93c-b3109a8bee58", "format": "json"}]: dispatch
Dec 06 10:23:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:07253e42-ed74-4565-a93c-b3109a8bee58, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:07253e42-ed74-4565-a93c-b3109a8bee58, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e228 e228: 6 total, 6 up, 6 in
Dec 06 10:23:53 np0005548790.localdomain ceph-mon[301742]: osdmap e227: 6 total, 6 up, 6 in
Dec 06 10:23:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "07253e42-ed74-4565-a93c-b3109a8bee58", "format": "json"}]: dispatch
Dec 06 10:23:53 np0005548790.localdomain ceph-mon[301742]: osdmap e228: 6 total, 6 up, 6 in
Dec 06 10:23:53 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3355742361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:53 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3355742361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v502: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 70 KiB/s wr, 179 op/s
Dec 06 10:23:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:23:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:23:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:23:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:23:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:23:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:23:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:54.014 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:54 np0005548790.localdomain ceph-mon[301742]: pgmap v502: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 70 KiB/s wr, 179 op/s
Dec 06 10:23:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:23:54 np0005548790.localdomain podman[320883]: 2025-12-06 10:23:54.562796035 +0000 UTC m=+0.078775463 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:23:54 np0005548790.localdomain podman[320883]: 2025-12-06 10:23:54.573966217 +0000 UTC m=+0.089945735 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:23:54 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:23:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e229 e229: 6 total, 6 up, 6 in
Dec 06 10:23:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 66 KiB/s wr, 197 op/s
Dec 06 10:23:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:55.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:23:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:55.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:23:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:55.355 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:23:56 np0005548790.localdomain ceph-mon[301742]: osdmap e229: 6 total, 6 up, 6 in
Dec 06 10:23:56 np0005548790.localdomain ceph-mon[301742]: pgmap v504: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 66 KiB/s wr, 197 op/s
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "07253e42-ed74-4565-a93c-b3109a8bee58_5f296b41-2976-4ca1-90d8-b8619d1ec6e3", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:07253e42-ed74-4565-a93c-b3109a8bee58_5f296b41-2976-4ca1-90d8-b8619d1ec6e3, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:07253e42-ed74-4565-a93c-b3109a8bee58_5f296b41-2976-4ca1-90d8-b8619d1ec6e3, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "07253e42-ed74-4565-a93c-b3109a8bee58", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:07253e42-ed74-4565-a93c-b3109a8bee58, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:23:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:07253e42-ed74-4565-a93c-b3109a8bee58, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:23:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 45 KiB/s wr, 134 op/s
Dec 06 10:23:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3696695018' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:23:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:23:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:23:57 np0005548790.localdomain podman[320903]: 2025-12-06 10:23:57.580686628 +0000 UTC m=+0.089975857 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:23:57 np0005548790.localdomain podman[320903]: 2025-12-06 10:23:57.589511396 +0000 UTC m=+0.098800675 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:23:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e230 e230: 6 total, 6 up, 6 in
Dec 06 10:23:57 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:23:57 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:23:57.660 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:23:57 np0005548790.localdomain podman[320904]: 2025-12-06 10:23:57.661484675 +0000 UTC m=+0.168851331 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:23:57 np0005548790.localdomain podman[320904]: 2025-12-06 10:23:57.731400567 +0000 UTC m=+0.238767203 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:23:57 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "07253e42-ed74-4565-a93c-b3109a8bee58_5f296b41-2976-4ca1-90d8-b8619d1ec6e3", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "07253e42-ed74-4565-a93c-b3109a8bee58", "force": true, "format": "json"}]: dispatch
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: pgmap v505: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 45 KiB/s wr, 134 op/s
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: osdmap e230: 6 total, 6 up, 6 in
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2773610337' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2773610337' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:23:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e231 e231: 6 total, 6 up, 6 in
Dec 06 10:23:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:59.016 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:59.018 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:23:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:59.019 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:23:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:59.019 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:59.050 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:23:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:23:59.051 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:23:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:23:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 258 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 4.0 MiB/s wr, 179 op/s
Dec 06 10:23:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e232 e232: 6 total, 6 up, 6 in
Dec 06 10:23:59 np0005548790.localdomain ceph-mon[301742]: osdmap e231: 6 total, 6 up, 6 in
Dec 06 10:24:00 np0005548790.localdomain ceph-mon[301742]: pgmap v508: 177 pgs: 177 active+clean; 258 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 4.0 MiB/s wr, 179 op/s
Dec 06 10:24:00 np0005548790.localdomain ceph-mon[301742]: osdmap e232: 6 total, 6 up, 6 in
Dec 06 10:24:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "76242542-0436-4e10-a0ca-932a5ea39e05", "format": "json"}]: dispatch
Dec 06 10:24:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76242542-0436-4e10-a0ca-932a5ea39e05, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:76242542-0436-4e10-a0ca-932a5ea39e05, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 258 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 4.0 MiB/s wr, 179 op/s
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/402114055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/402114055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/573079347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/402114055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/402114055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e233 e233: 6 total, 6 up, 6 in
Dec 06 10:24:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "76242542-0436-4e10-a0ca-932a5ea39e05", "format": "json"}]: dispatch
Dec 06 10:24:02 np0005548790.localdomain ceph-mon[301742]: pgmap v510: 177 pgs: 177 active+clean; 258 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 4.0 MiB/s wr, 179 op/s
Dec 06 10:24:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 177 active+clean; 337 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 7.7 MiB/s rd, 11 MiB/s wr, 355 op/s
Dec 06 10:24:03 np0005548790.localdomain dnsmasq[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/addn_hosts - 0 addresses
Dec 06 10:24:03 np0005548790.localdomain podman[320970]: 2025-12-06 10:24:03.618754238 +0000 UTC m=+0.060663963 container kill 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:24:03 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/host
Dec 06 10:24:03 np0005548790.localdomain dnsmasq-dhcp[320558]: read /var/lib/neutron/dhcp/a3d4440b-134c-42ab-a512-ee72a4c55dbf/opts
Dec 06 10:24:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e234 e234: 6 total, 6 up, 6 in
Dec 06 10:24:03 np0005548790.localdomain ceph-mon[301742]: osdmap e233: 6 total, 6 up, 6 in
Dec 06 10:24:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1694106485' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1694106485' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:03 np0005548790.localdomain ceph-mon[301742]: osdmap e234: 6 total, 6 up, 6 in
Dec 06 10:24:03 np0005548790.localdomain kernel: device tapbe97762f-7f left promiscuous mode
Dec 06 10:24:03 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:03Z|00254|binding|INFO|Releasing lport be97762f-7fc5-4dbe-9eba-6ab80559ad8a from this chassis (sb_readonly=0)
Dec 06 10:24:03 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:03Z|00255|binding|INFO|Setting lport be97762f-7fc5-4dbe-9eba-6ab80559ad8a down in Southbound
Dec 06 10:24:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:03.801 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:03 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:03.810 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a3d4440b-134c-42ab-a512-ee72a4c55dbf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3d4440b-134c-42ab-a512-ee72a4c55dbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8321907d0a38406c8e7c51f32ab796ad', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51072df1-b4ef-4272-a07f-9e27da17d0a7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=be97762f-7fc5-4dbe-9eba-6ab80559ad8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:24:03 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:03.812 159200 INFO neutron.agent.ovn.metadata.agent [-] Port be97762f-7fc5-4dbe-9eba-6ab80559ad8a in datapath a3d4440b-134c-42ab-a512-ee72a4c55dbf unbound from our chassis
Dec 06 10:24:03 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:03.814 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3d4440b-134c-42ab-a512-ee72a4c55dbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:24:03 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:03.816 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[dc48a8d3-3f3c-469e-b6f6-d2feaa586cc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:03.823 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:04.091 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:04 np0005548790.localdomain ceph-mon[301742]: pgmap v512: 177 pgs: 177 active+clean; 337 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 7.7 MiB/s rd, 11 MiB/s wr, 355 op/s
Dec 06 10:24:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e235 e235: 6 total, 6 up, 6 in
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 337 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 167 op/s
Dec 06 10:24:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:05.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:05.411 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1159844726' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1159844726' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "76242542-0436-4e10-a0ca-932a5ea39e05_4c1bc32d-22e5-4d62-bff7-71f1039d4c95", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76242542-0436-4e10-a0ca-932a5ea39e05_4c1bc32d-22e5-4d62-bff7-71f1039d4c95, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76242542-0436-4e10-a0ca-932a5ea39e05_4c1bc32d-22e5-4d62-bff7-71f1039d4c95, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "76242542-0436-4e10-a0ca-932a5ea39e05", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76242542-0436-4e10-a0ca-932a5ea39e05, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:24:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:76242542-0436-4e10-a0ca-932a5ea39e05, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: osdmap e235: 6 total, 6 up, 6 in
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1159844726' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1159844726' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:05 np0005548790.localdomain dnsmasq[320558]: exiting on receipt of SIGTERM
Dec 06 10:24:05 np0005548790.localdomain podman[321011]: 2025-12-06 10:24:05.819599137 +0000 UTC m=+0.065170195 container kill 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:24:05 np0005548790.localdomain systemd[1]: libpod-5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a.scope: Deactivated successfully.
Dec 06 10:24:05 np0005548790.localdomain sudo[321022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:24:05 np0005548790.localdomain sudo[321022]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:24:05 np0005548790.localdomain sudo[321022]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:05 np0005548790.localdomain podman[321041]: 2025-12-06 10:24:05.890537468 +0000 UTC m=+0.052327308 container died 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:24:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a-userdata-shm.mount: Deactivated successfully.
Dec 06 10:24:05 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-bd234320d16cdb39cb97342081f9aeafc3cac69c79df3f8518c1b61d71f7ef96-merged.mount: Deactivated successfully.
Dec 06 10:24:05 np0005548790.localdomain podman[321041]: 2025-12-06 10:24:05.936552553 +0000 UTC m=+0.098342343 container remove 5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a3d4440b-134c-42ab-a512-ee72a4c55dbf, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:24:05 np0005548790.localdomain systemd[1]: libpod-conmon-5073edbdc9ca2668783e82afe059b920400b09e83323b3fef35fd3e2815b730a.scope: Deactivated successfully.
Dec 06 10:24:05 np0005548790.localdomain sudo[321064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 06 10:24:05 np0005548790.localdomain sudo[321064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:24:06 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2da3d4440b\x2d134c\x2d42ab\x2da512\x2dee72a4c55dbf.mount: Deactivated successfully.
Dec 06 10:24:06 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:24:06.283 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:24:06 np0005548790.localdomain sudo[321064]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:24:06 np0005548790.localdomain sudo[321104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:24:06 np0005548790.localdomain sudo[321104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:24:06 np0005548790.localdomain sudo[321104]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:06 np0005548790.localdomain sudo[321122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:24:06 np0005548790.localdomain sudo[321122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: pgmap v515: 177 pgs: 177 active+clean; 337 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 167 op/s
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "76242542-0436-4e10-a0ca-932a5ea39e05_4c1bc32d-22e5-4d62-bff7-71f1039d4c95", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "76242542-0436-4e10-a0ca-932a5ea39e05", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:07 np0005548790.localdomain sudo[321122]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 177 active+clean; 337 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 6.7 MiB/s wr, 157 op/s
Dec 06 10:24:07 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:24:07.288 262327 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e236 e236: 6 total, 6 up, 6 in
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:24:07 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev be453266-6d71-4d6f-af81-a41388f0ab1a (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:24:07 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev be453266-6d71-4d6f-af81-a41388f0ab1a (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:24:07 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event be453266-6d71-4d6f-af81-a41388f0ab1a (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:24:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e237 e237: 6 total, 6 up, 6 in
Dec 06 10:24:07 np0005548790.localdomain sudo[321173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:24:07 np0005548790.localdomain sudo[321173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:24:07 np0005548790.localdomain sudo[321173]: pam_unix(sudo:session): session closed for user root
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: pgmap v516: 177 pgs: 177 active+clean; 337 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 6.7 MiB/s wr, 157 op/s
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: osdmap e236: 6 total, 6 up, 6 in
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:24:08 np0005548790.localdomain ceph-mon[301742]: osdmap e237: 6 total, 6 up, 6 in
Dec 06 10:24:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:09.122 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 244 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 32 KiB/s wr, 104 op/s
Dec 06 10:24:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e238 e238: 6 total, 6 up, 6 in
Dec 06 10:24:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:24:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3765798835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:24:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3765798835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "format": "json"}]: dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mon[301742]: pgmap v519: 177 pgs: 177 active+clean; 244 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 32 KiB/s wr, 104 op/s
Dec 06 10:24:10 np0005548790.localdomain ceph-mon[301742]: osdmap e238: 6 total, 6 up, 6 in
Dec 06 10:24:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3765798835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3765798835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "f95d5566-befb-4ef3-874d-5a8d6c4a6df5_77a05e19-399e-4f52-950c-489f333205cf", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f95d5566-befb-4ef3-874d-5a8d6c4a6df5_77a05e19-399e-4f52-950c-489f333205cf, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f95d5566-befb-4ef3-874d-5a8d6c4a6df5_77a05e19-399e-4f52-950c-489f333205cf, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "f95d5566-befb-4ef3-874d-5a8d6c4a6df5", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f95d5566-befb-4ef3-874d-5a8d6c4a6df5, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp'
Dec 06 10:24:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta.tmp' to config b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562/.meta'
Dec 06 10:24:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f95d5566-befb-4ef3-874d-5a8d6c4a6df5, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 244 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 29 KiB/s wr, 96 op/s
Dec 06 10:24:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "format": "json"}]: dispatch
Dec 06 10:24:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e239 e239: 6 total, 6 up, 6 in
Dec 06 10:24:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:24:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "f95d5566-befb-4ef3-874d-5a8d6c4a6df5_77a05e19-399e-4f52-950c-489f333205cf", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "snap_name": "f95d5566-befb-4ef3-874d-5a8d6c4a6df5", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: pgmap v521: 177 pgs: 177 active+clean; 244 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 29 KiB/s wr, 96 op/s
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: osdmap e239: 6 total, 6 up, 6 in
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:24:12 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:24:12.600 2 INFO neutron.agent.securitygroups_rpc [None req-63e1cfe1-0be8-4c17-9453-907c82bfa210 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group rule updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']
Dec 06 10:24:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e240 e240: 6 total, 6 up, 6 in
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 75 KiB/s wr, 152 op/s
Dec 06 10:24:13 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:24:13.351 2 INFO neutron.agent.securitygroups_rpc [None req-9218de2a-a054-45fc-bd21-8f037be37a59 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group rule updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']
Dec 06 10:24:13 np0005548790.localdomain ceph-mon[301742]: osdmap e240: 6 total, 6 up, 6 in
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/077e73af-9063-4209-9319-e18e1a460598/.meta.tmp'
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/077e73af-9063-4209-9319-e18e1a460598/.meta.tmp' to config b'/volumes/_nogroup/077e73af-9063-4209-9319-e18e1a460598/.meta'
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "format": "json"}]: dispatch
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:13 np0005548790.localdomain sshd[321192]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:24:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:14.127 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:24:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:14.129 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:24:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:14.129 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:24:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:14.129 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:24:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:14.152 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:14.153 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "format": "json"}]: dispatch
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09dfdd64-7386-4e00-b67b-f81d142ea562, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09dfdd64-7386-4e00-b67b-f81d142ea562, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:14 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:14.407+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09dfdd64-7386-4e00-b67b-f81d142ea562' of type subvolume
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09dfdd64-7386-4e00-b67b-f81d142ea562' of type subvolume
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09dfdd64-7386-4e00-b67b-f81d142ea562'' moved to trashcan
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:24:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09dfdd64-7386-4e00-b67b-f81d142ea562, vol_name:cephfs) < ""
Dec 06 10:24:14 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:24:14 np0005548790.localdomain podman[321193]: 2025-12-06 10:24:14.569899356 +0000 UTC m=+0.084157838 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:24:14 np0005548790.localdomain podman[321193]: 2025-12-06 10:24:14.579127636 +0000 UTC m=+0.093386108 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:24:14 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:24:14 np0005548790.localdomain ceph-mon[301742]: pgmap v524: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 75 KiB/s wr, 152 op/s
Dec 06 10:24:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "format": "json"}]: dispatch
Dec 06 10:24:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v525: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 47 op/s
Dec 06 10:24:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "format": "json"}]: dispatch
Dec 06 10:24:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09dfdd64-7386-4e00-b67b-f81d142ea562", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e241 e241: 6 total, 6 up, 6 in
Dec 06 10:24:16 np0005548790.localdomain ceph-mon[301742]: pgmap v525: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 47 op/s
Dec 06 10:24:16 np0005548790.localdomain ceph-mon[301742]: osdmap e241: 6 total, 6 up, 6 in
Dec 06 10:24:16 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 06 10:24:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:16 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:16.964 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:16.964 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:16.992 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.091 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.091 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.097 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.097 280869 INFO nova.compute.claims [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Claim successful on node np0005548790.localdomain
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.227 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 47 op/s
Dec 06 10:24:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:24:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:24:17 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:24:17 np0005548790.localdomain podman[321231]: 2025-12-06 10:24:17.568122888 +0000 UTC m=+0.085155556 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:24:17 np0005548790.localdomain podman[321231]: 2025-12-06 10:24:17.581109619 +0000 UTC m=+0.098142267 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:24:17 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:24:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e242 e242: 6 total, 6 up, 6 in
Dec 06 10:24:17 np0005548790.localdomain podman[321233]: 2025-12-06 10:24:17.63099084 +0000 UTC m=+0.143042083 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, config_id=edpm, name=ubi9-minimal)
Dec 06 10:24:17 np0005548790.localdomain podman[321233]: 2025-12-06 10:24:17.669544802 +0000 UTC m=+0.181596045 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:24:17 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:24:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:24:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1847092686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 06 10:24:17 np0005548790.localdomain ceph-mon[301742]: osdmap e242: 6 total, 6 up, 6 in
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.708 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.714 280869 DEBUG nova.compute.provider_tree [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:24:17 np0005548790.localdomain podman[321232]: 2025-12-06 10:24:17.726086874 +0000 UTC m=+0.243119482 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0)
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.731 280869 DEBUG nova.scheduler.client.report [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:24:17 np0005548790.localdomain podman[321232]: 2025-12-06 10:24:17.760289479 +0000 UTC m=+0.277322017 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.766 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.767 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 06 10:24:17 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.845 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.846 280869 DEBUG nova.network.neutron [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.870 280869 INFO nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.890 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.975 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.978 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 06 10:24:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:17.979 280869 INFO nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Creating image(s)
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.019 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.059 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.099 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.104 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.178 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.179 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "cb68b180567fda17719a7393615b2f958ad3226e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.180 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "cb68b180567fda17719a7393615b2f958ad3226e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.181 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "cb68b180567fda17719a7393615b2f958ad3226e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.223 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.229 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e b59377c8-c3d7-452b-8305-d2853ef47bb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.251 280869 DEBUG nova.policy [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 06 10:24:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:24:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:24:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:24:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:24:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:24:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18760 "" "Go-http-client/1.1"
Dec 06 10:24:18 np0005548790.localdomain ceph-mon[301742]: pgmap v527: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 42 KiB/s wr, 47 op/s
Dec 06 10:24:18 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1847092686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.826 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/cb68b180567fda17719a7393615b2f958ad3226e b59377c8-c3d7-452b-8305-d2853ef47bb4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:18.934 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] resizing rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 06 10:24:19 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:24:19.085 2 INFO neutron.agent.securitygroups_rpc [req-6cf9c8b9-f82f-4729-827a-87ee94dc739b req-d70b880e-b383-4ad3-91f4-06f3e667f577 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group member updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']
Dec 06 10:24:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.110 280869 DEBUG nova.objects.instance [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lazy-loading 'migration_context' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.135 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.136 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Ensure instance console log exists: /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.137 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.137 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.137 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.154 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.156 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.156 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.157 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.188 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.189 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 06 10:24:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:19.263 280869 DEBUG nova.network.neutron [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Successfully created port: c8391efe-eabf-46a0-94e6-c12eb660cfb2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 06 10:24:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v529: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 88 KiB/s wr, 46 op/s
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.012 280869 DEBUG nova.network.neutron [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Successfully updated port: c8391efe-eabf-46a0-94e6-c12eb660cfb2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.033 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.034 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquired lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.034 280869 DEBUG nova.network.neutron [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.089 280869 DEBUG nova.network.neutron [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "077e73af-9063-4209-9319-e18e1a460598", "format": "json"}]: dispatch
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:077e73af-9063-4209-9319-e18e1a460598, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:077e73af-9063-4209-9319-e18e1a460598, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '077e73af-9063-4209-9319-e18e1a460598' of type subvolume
Dec 06 10:24:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:20.281+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '077e73af-9063-4209-9319-e18e1a460598' of type subvolume
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/077e73af-9063-4209-9319-e18e1a460598'' moved to trashcan
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:24:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:077e73af-9063-4209-9319-e18e1a460598, vol_name:cephfs) < ""
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.344 280869 DEBUG nova.compute.manager [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-changed-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.344 280869 DEBUG nova.compute.manager [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Refreshing instance network info cache due to event network-changed-c8391efe-eabf-46a0-94e6-c12eb660cfb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.345 280869 DEBUG oslo_concurrency.lockutils [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.418 280869 DEBUG nova.network.neutron [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updating instance_info_cache with network_info: [{"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.447 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Releasing lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.448 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Instance network_info: |[{"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.448 280869 DEBUG oslo_concurrency.lockutils [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquired lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.449 280869 DEBUG nova.network.neutron [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Refreshing network info cache for port c8391efe-eabf-46a0-94e6-c12eb660cfb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.453 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Start _get_guest_xml network_info=[{"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:13:11Z,direct_url=<?>,disk_format='qcow2',id=6a944ab6-8965-4055-b7fc-af6e395005ea,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3d603431c0bb4967bafc7a0aa6108bfe',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:13:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'encryption_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_options': None, 'size': 0, 'image_id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.459 280869 WARNING nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.467 280869 DEBUG nova.virt.libvirt.host [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.467 280869 DEBUG nova.virt.libvirt.host [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.470 280869 DEBUG nova.virt.libvirt.host [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Searching host: 'np0005548790.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.470 280869 DEBUG nova.virt.libvirt.host [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.471 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.471 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-06T10:13:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a0a7498e-22eb-495c-a2e3-89ba9e483bf6',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-06T10:13:11Z,direct_url=<?>,disk_format='qcow2',id=6a944ab6-8965-4055-b7fc-af6e395005ea,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3d603431c0bb4967bafc7a0aa6108bfe',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-06T10:13:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.472 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.473 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.473 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.473 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.474 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.474 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.475 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.475 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.476 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.476 280869 DEBUG nova.virt.hardware [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.481 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:20 np0005548790.localdomain sshd[321192]: Received disconnect from 101.47.160.186 port 44628:11: Bye Bye [preauth]
Dec 06 10:24:20 np0005548790.localdomain sshd[321192]: Disconnected from authenticating user root 101.47.160.186 port 44628 [preauth]
Dec 06 10:24:20 np0005548790.localdomain ceph-mon[301742]: pgmap v529: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 88 KiB/s wr, 46 op/s
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.868 280869 DEBUG nova.network.neutron [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updated VIF entry in instance network info cache for port c8391efe-eabf-46a0-94e6-c12eb660cfb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.869 280869 DEBUG nova.network.neutron [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updating instance_info_cache with network_info: [{"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.888 280869 DEBUG oslo_concurrency.lockutils [req-079423ff-2601-4bc8-88fd-a5cca630063a req-638e01de-8632-4935-b0f1-dceaf93c26f8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Releasing lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:24:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:24:20 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3448344129' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:20.971 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.013 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.021 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 43 KiB/s wr, 2 op/s
Dec 06 10:24:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:24:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2893884725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.455 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.458 280869 DEBUG nova.virt.libvirt.vif [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:24:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-739598656',display_name='tempest-VolumesBackupsTest-instance-739598656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-volumesbackupstest-instance-739598656',id=10,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI3Da3ZAzMu07Y3jOXfDkV45E3//rDjS7tw7vpkgEh1B1VKbPEiZiwURSVqrMcu/DW1QQdZYZpxlNs8HoKSRsiyrqyYFtsCyQHjgg2Q1M3OkTcWMp/2hJEhqLfueca6tfQ==',key_name='tempest-keypair-1339678623',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b51f704fe6204487b0317c3332364cca',ramdisk_id='',reservation_id='r-ljipvdtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1677778141',owner_user_name='tempest-VolumesBackupsTest-1677778141-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:24:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b40d497af0834616a664e6909c0f6685',uuid=b59377c8-c3d7-452b-8305-d2853ef47bb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.459 280869 DEBUG nova.network.os_vif_util [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Converting VIF {"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.460 280869 DEBUG nova.network.os_vif_util [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.462 280869 DEBUG nova.objects.instance [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lazy-loading 'pci_devices' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.481 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] End _get_guest_xml xml=<domain type="kvm">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <uuid>b59377c8-c3d7-452b-8305-d2853ef47bb4</uuid>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <name>instance-0000000a</name>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <memory>131072</memory>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <vcpu>1</vcpu>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <metadata>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:name>tempest-VolumesBackupsTest-instance-739598656</nova:name>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:creationTime>2025-12-06 10:24:20</nova:creationTime>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:flavor name="m1.nano">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:memory>128</nova:memory>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:disk>1</nova:disk>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:swap>0</nova:swap>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:ephemeral>0</nova:ephemeral>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:vcpus>1</nova:vcpus>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </nova:flavor>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:owner>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:user uuid="b40d497af0834616a664e6909c0f6685">tempest-VolumesBackupsTest-1677778141-project-member</nova:user>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:project uuid="b51f704fe6204487b0317c3332364cca">tempest-VolumesBackupsTest-1677778141</nova:project>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </nova:owner>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:root type="image" uuid="6a944ab6-8965-4055-b7fc-af6e395005ea"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <nova:ports>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <nova:port uuid="c8391efe-eabf-46a0-94e6-c12eb660cfb2">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         </nova:port>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </nova:ports>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </nova:instance>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </metadata>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <sysinfo type="smbios">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <system>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <entry name="manufacturer">RDO</entry>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <entry name="product">OpenStack Compute</entry>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <entry name="serial">b59377c8-c3d7-452b-8305-d2853ef47bb4</entry>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <entry name="uuid">b59377c8-c3d7-452b-8305-d2853ef47bb4</entry>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <entry name="family">Virtual Machine</entry>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </system>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </sysinfo>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <os>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <boot dev="hd"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <smbios mode="sysinfo"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </os>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <features>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <acpi/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <apic/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <vmcoreinfo/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </features>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <clock offset="utc">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <timer name="pit" tickpolicy="delay"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <timer name="hpet" present="no"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </clock>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <cpu mode="host-model" match="exact">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <topology sockets="1" cores="1" threads="1"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </cpu>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   <devices>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="disk">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/b59377c8-c3d7-452b-8305-d2853ef47bb4_disk">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <target dev="vda" bus="virtio"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <disk type="network" device="cdrom">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <driver type="raw" cache="none"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <source protocol="rbd" name="vms/b59377c8-c3d7-452b-8305-d2853ef47bb4_disk.config">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.103" port="6789"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.104" port="6789"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <host name="172.18.0.105" port="6789"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </source>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <auth username="openstack">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:         <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       </auth>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <target dev="sda" bus="sata"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </disk>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <interface type="ethernet">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <mac address="fa:16:3e:ec:95:9c"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <driver name="vhost" rx_queue_size="512"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <mtu size="1442"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <target dev="tapc8391efe-ea"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </interface>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <serial type="pty">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <log file="/var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/console.log" append="off"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </serial>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <video>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <model type="virtio"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </video>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <input type="tablet" bus="usb"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <rng model="virtio">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <backend model="random">/dev/urandom</backend>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </rng>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="pci" model="pcie-root-port"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <controller type="usb" index="0"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     <memballoon model="virtio">
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:       <stats period="10"/>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:     </memballoon>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:   </devices>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: </domain>
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.482 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Preparing to wait for external event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.482 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.482 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.483 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.483 280869 DEBUG nova.virt.libvirt.vif [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-06T10:24:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-739598656',display_name='tempest-VolumesBackupsTest-instance-739598656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-volumesbackupstest-instance-739598656',id=10,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI3Da3ZAzMu07Y3jOXfDkV45E3//rDjS7tw7vpkgEh1B1VKbPEiZiwURSVqrMcu/DW1QQdZYZpxlNs8HoKSRsiyrqyYFtsCyQHjgg2Q1M3OkTcWMp/2hJEhqLfueca6tfQ==',key_name='tempest-keypair-1339678623',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b51f704fe6204487b0317c3332364cca',ramdisk_id='',reservation_id='r-ljipvdtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-1677778141',owner_user_name='tempest-VolumesBackupsTest-1677778141-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-06T10:24:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b40d497af0834616a664e6909c0f6685',uuid=b59377c8-c3d7-452b-8305-d2853ef47bb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.483 280869 DEBUG nova.network.os_vif_util [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Converting VIF {"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.484 280869 DEBUG nova.network.os_vif_util [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.484 280869 DEBUG os_vif [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.485 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.485 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.485 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.488 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.489 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8391efe-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.489 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8391efe-ea, col_values=(('external_ids', {'iface-id': 'c8391efe-eabf-46a0-94e6-c12eb660cfb2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:95:9c', 'vm-uuid': 'b59377c8-c3d7-452b-8305-d2853ef47bb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.495 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.497 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.498 280869 INFO os_vif [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea')
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.536 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.537 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.537 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No VIF found with MAC fa:16:3e:ec:95:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.537 280869 INFO nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Using config drive
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.570 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:21 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "077e73af-9063-4209-9319-e18e1a460598", "format": "json"}]: dispatch
Dec 06 10:24:21 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "077e73af-9063-4209-9319-e18e1a460598", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3448344129' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:21 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2893884725' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.803 280869 INFO nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Creating config drive at /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.config
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.811 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9kh40vy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.944 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9kh40vy" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.992 280869 DEBUG nova.storage.rbd_utils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] rbd image b59377c8-c3d7-452b-8305-d2853ef47bb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 06 10:24:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:21.998 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.config b59377c8-c3d7-452b-8305-d2853ef47bb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.213 280869 DEBUG oslo_concurrency.processutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.config b59377c8-c3d7-452b-8305-d2853ef47bb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.215s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.214 280869 INFO nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Deleting local config drive /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.config because it was imported into RBD.
Dec 06 10:24:22 np0005548790.localdomain systemd[1]: Started libvirt secret daemon.
Dec 06 10:24:22 np0005548790.localdomain kernel: device tapc8391efe-ea entered promiscuous mode
Dec 06 10:24:22 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016662.3313] manager: (tapc8391efe-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Dec 06 10:24:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:22Z|00256|binding|INFO|Claiming lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 for this chassis.
Dec 06 10:24:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:22Z|00257|binding|INFO|c8391efe-eabf-46a0-94e6-c12eb660cfb2: Claiming fa:16:3e:ec:95:9c 10.100.0.10
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.333 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 np0005548790.localdomain systemd-udevd[321609]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.345 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:95:9c 10.100.0.10'], port_security=['fa:16:3e:ec:95:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55ffc629-08a5-404f-87a7-26deb97840dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b51f704fe6204487b0317c3332364cca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd407968b-b8de-45cd-a244-3bf62d3c0357', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a555e286-25fe-4028-bbdb-d66a3efae4d1, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=c8391efe-eabf-46a0-94e6-c12eb660cfb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.347 159200 INFO neutron.agent.ovn.metadata.agent [-] Port c8391efe-eabf-46a0-94e6-c12eb660cfb2 in datapath 55ffc629-08a5-404f-87a7-26deb97840dc bound to our chassis
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.349 159200 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55ffc629-08a5-404f-87a7-26deb97840dc
Dec 06 10:24:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:22Z|00258|binding|INFO|Setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 up in Southbound
Dec 06 10:24:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:22Z|00259|binding|INFO|Setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 ovn-installed in OVS
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.351 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.353 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.356 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016662.3647] device (tapc8391efe-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.363 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f18e9e-fb75-447d-a6e4-fa50b66277ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.364 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55ffc629-01 in ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.367 262518 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55ffc629-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.367 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[cafd2c36-c858-4e8e-b328-a06d4150ac11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016662.3697] device (tapc8391efe-ea): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.369 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[a0da2a7e-5910-4b9c-815c-3429e14dfcbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain systemd-machined[202564]: New machine qemu-4-instance-0000000a.
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.382 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[00756122-26de-4599-a3c5-9ff9befdcef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.408 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d52716-4b1d-45cb-baaf-dc831922b8f2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.442 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e3241e-2900-4015-b14c-ec37c2da2ff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.447 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[b0330291-1d33-4b33-ae7a-9d6271b04f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016662.4490] manager: (tap55ffc629-00): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.476 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[73f76ae6-b8aa-4a49-a631-0fc813ac8243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.480 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[5d09d268-875c-421b-aeac-9b5a4cb9471d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016662.5023] device (tap55ffc629-00): carrier: link connected
Dec 06 10:24:22 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap55ffc629-01: link becomes ready
Dec 06 10:24:22 np0005548790.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap55ffc629-00: link becomes ready
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.509 309039 DEBUG oslo.privsep.daemon [-] privsep: reply[80c8a7a5-473f-4096-a11c-06527c68ef4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.526 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[e62122ad-84d5-44bc-bfc8-40c1ebb1098f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55ffc629-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:58:20:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1307763, 'reachable_time': 22923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321645, 'error': None, 'target': 'ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.545 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8565b56f-ddf5-4abd-95d8-c14a3a0ec7b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:20ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1307763, 'tstamp': 1307763}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321653, 'error': None, 'target': 'ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.565 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[d067a406-0052-4afa-b88a-2802d96150c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55ffc629-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:58:20:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1307763, 'reachable_time': 22923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321662, 'error': None, 'target': 'ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.597 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0237d6-b432-4fc6-91d4-62ac616286fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 e243: 6 total, 6 up, 6 in
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.664 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[781ebbb5-731c-4abe-a6bc-08e60894125e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.668 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55ffc629-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.668 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.670 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55ffc629-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:24:22 np0005548790.localdomain kernel: device tap55ffc629-00 entered promiscuous mode
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.708 280869 DEBUG nova.compute.manager [req-bd49f73b-7ce3-40d2-a761-0ad2beda1113 req-8cb07ba4-dc6a-4d32-8c75-e57893d6bdd0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.709 280869 DEBUG oslo_concurrency.lockutils [req-bd49f73b-7ce3-40d2-a761-0ad2beda1113 req-8cb07ba4-dc6a-4d32-8c75-e57893d6bdd0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.710 280869 DEBUG oslo_concurrency.lockutils [req-bd49f73b-7ce3-40d2-a761-0ad2beda1113 req-8cb07ba4-dc6a-4d32-8c75-e57893d6bdd0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.710 280869 DEBUG oslo_concurrency.lockutils [req-bd49f73b-7ce3-40d2-a761-0ad2beda1113 req-8cb07ba4-dc6a-4d32-8c75-e57893d6bdd0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.711 280869 DEBUG nova.compute.manager [req-bd49f73b-7ce3-40d2-a761-0ad2beda1113 req-8cb07ba4-dc6a-4d32-8c75-e57893d6bdd0 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Processing event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.711 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.711 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55ffc629-00, col_values=(('external_ids', {'iface-id': '03e3daed-e0ad-41ef-b4d5-42d85bf912f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:24:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:22Z|00260|binding|INFO|Releasing lport 03e3daed-e0ad-41ef-b4d5-42d85bf912f3 from this chassis (sb_readonly=0)
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.717 159200 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55ffc629-08a5-404f-87a7-26deb97840dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55ffc629-08a5-404f-87a7-26deb97840dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.718 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[5119e28c-89df-4bba-8997-b098ce2f5169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.720 159200 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: global
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     log         /dev/log local0 debug
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     log-tag     haproxy-metadata-proxy-55ffc629-08a5-404f-87a7-26deb97840dc
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     user        root
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     group       root
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     maxconn     1024
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     pidfile     /var/lib/neutron/external/pids/55ffc629-08a5-404f-87a7-26deb97840dc.pid.haproxy
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     daemon
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: defaults
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     log global
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     mode http
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     option httplog
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     option dontlognull
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     option http-server-close
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     option forwardfor
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     retries                 3
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-request    30s
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout connect         30s
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout client          32s
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout server          32s
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     timeout http-keep-alive 30s
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: listen listener
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     bind 169.254.169.254:80
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     server metadata /var/lib/neutron/metadata_proxy
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:     http-request add-header X-OVN-Network-ID 55ffc629-08a5-404f-87a7-26deb97840dc
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 06 10:24:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:22.722 159200 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc', 'env', 'PROCESS_TAG=haproxy-55ffc629-08a5-404f-87a7-26deb97840dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55ffc629-08a5-404f-87a7-26deb97840dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.725 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:22 np0005548790.localdomain ceph-mon[301742]: pgmap v530: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 43 KiB/s wr, 2 op/s
Dec 06 10:24:22 np0005548790.localdomain ceph-mon[301742]: osdmap e243: 6 total, 6 up, 6 in
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.828 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016662.8272665, b59377c8-c3d7-452b-8305-d2853ef47bb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.829 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] VM Started (Lifecycle Event)
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.832 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.852 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.856 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.858 280869 INFO nova.virt.libvirt.driver [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Instance spawned successfully.
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.859 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.863 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.890 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.891 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016662.8275762, b59377c8-c3d7-452b-8305-d2853ef47bb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.891 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] VM Paused (Lifecycle Event)
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.895 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.896 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.897 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.897 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.898 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.899 280869 DEBUG nova.virt.libvirt.driver [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.922 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.925 280869 DEBUG nova.virt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Emitting event <LifecycleEvent: 1765016662.8360114, b59377c8-c3d7-452b-8305-d2853ef47bb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.926 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] VM Resumed (Lifecycle Event)
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.979 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:24:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:22.985 280869 DEBUG nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 06 10:24:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:23.008 280869 INFO nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Took 5.03 seconds to spawn the instance on the hypervisor.
Dec 06 10:24:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:23.009 280869 DEBUG nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:24:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:23.013 280869 INFO nova.compute.manager [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 06 10:24:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:23.102 280869 INFO nova.compute.manager [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Took 6.05 seconds to build instance.
Dec 06 10:24:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:23.123 280869 DEBUG oslo_concurrency.lockutils [None req-6cf9c8b9-f82f-4729-827a-87ee94dc739b b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:23 np0005548790.localdomain podman[321720]: 2025-12-06 10:24:23.18551765 +0000 UTC m=+0.084113178 container create ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:24:23 np0005548790.localdomain systemd[1]: Started libpod-conmon-ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11.scope.
Dec 06 10:24:23 np0005548790.localdomain podman[321720]: 2025-12-06 10:24:23.141754756 +0000 UTC m=+0.040350314 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 06 10:24:23 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:24:23 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7628a71dceefaa970745c21c0b1861909eba1ecb7069446b647b01f7cac1931/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:24:23 np0005548790.localdomain podman[321720]: 2025-12-06 10:24:23.285766233 +0000 UTC m=+0.184361761 container init ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 63 op/s
Dec 06 10:24:23 np0005548790.localdomain podman[321720]: 2025-12-06 10:24:23.293339519 +0000 UTC m=+0.191935047 container start ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:24:23 np0005548790.localdomain neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc[321734]: [NOTICE]   (321738) : New worker (321740) forked
Dec 06 10:24:23 np0005548790.localdomain neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc[321734]: [NOTICE]   (321738) : Loading success.
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3e304c5e-12d4-47b8-989b-1fc1253e12f7/.meta.tmp'
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3e304c5e-12d4-47b8-989b-1fc1253e12f7/.meta.tmp' to config b'/volumes/_nogroup/3e304c5e-12d4-47b8-989b-1fc1253e12f7/.meta'
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "format": "json"}]: dispatch
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:23 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:24:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:24:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:24:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:24:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:24:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.198 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.741 280869 DEBUG nova.compute.manager [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.742 280869 DEBUG oslo_concurrency.lockutils [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.742 280869 DEBUG oslo_concurrency.lockutils [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.742 280869 DEBUG oslo_concurrency.lockutils [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.743 280869 DEBUG nova.compute.manager [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.743 280869 WARNING nova.compute.manager [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received unexpected event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with vm_state active and task_state None.
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.744 280869 DEBUG nova.compute.manager [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-changed-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.744 280869 DEBUG nova.compute.manager [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Refreshing instance network info cache due to event network-changed-c8391efe-eabf-46a0-94e6-c12eb660cfb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.744 280869 DEBUG oslo_concurrency.lockutils [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.745 280869 DEBUG oslo_concurrency.lockutils [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquired lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:24:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:24.745 280869 DEBUG nova.network.neutron [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Refreshing network info cache for port c8391efe-eabf-46a0-94e6-c12eb660cfb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 06 10:24:24 np0005548790.localdomain ceph-mon[301742]: pgmap v532: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.9 MiB/s wr, 63 op/s
Dec 06 10:24:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "format": "json"}]: dispatch
Dec 06 10:24:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/.meta.tmp'
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/.meta.tmp' to config b'/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/.meta'
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "format": "json"}]: dispatch
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:24:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Dec 06 10:24:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:24:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:25.600 280869 DEBUG nova.network.neutron [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updated VIF entry in instance network info cache for port c8391efe-eabf-46a0-94e6-c12eb660cfb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 06 10:24:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:25.601 280869 DEBUG nova.network.neutron [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updating instance_info_cache with network_info: [{"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:24:25 np0005548790.localdomain systemd[1]: tmp-crun.7mJD7r.mount: Deactivated successfully.
Dec 06 10:24:25 np0005548790.localdomain podman[321749]: 2025-12-06 10:24:25.633683663 +0000 UTC m=+0.134815210 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 06 10:24:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:25.634 280869 DEBUG oslo_concurrency.lockutils [req-3db4c41a-4353-4b30-a6bd-ffacbd2f83ce req-a87515dc-f14a-40c7-9a60-1c2f1ad8b0b8 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Releasing lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:24:25 np0005548790.localdomain podman[321749]: 2025-12-06 10:24:25.646180062 +0000 UTC m=+0.147311589 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec 06 10:24:25 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:24:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "format": "json"}]: dispatch
Dec 06 10:24:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:25 np0005548790.localdomain ceph-mon[301742]: pgmap v533: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.7 MiB/s wr, 60 op/s
Dec 06 10:24:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:26.492 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:26.570 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 06 10:24:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.3 MiB/s wr, 49 op/s
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/.meta.tmp'
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/.meta.tmp' to config b'/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/.meta'
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "format": "json"}]: dispatch
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:24:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:24:28 np0005548790.localdomain podman[321768]: 2025-12-06 10:24:28.576173086 +0000 UTC m=+0.088635290 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:24:28 np0005548790.localdomain podman[321769]: 2025-12-06 10:24:28.623217369 +0000 UTC m=+0.121022337 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:24:28 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 06 10:24:28 np0005548790.localdomain ceph-mon[301742]: pgmap v534: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.3 MiB/s wr, 49 op/s
Dec 06 10:24:28 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:28 np0005548790.localdomain podman[321769]: 2025-12-06 10:24:28.655334869 +0000 UTC m=+0.153139847 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 06 10:24:28 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:24:28 np0005548790.localdomain podman[321768]: 2025-12-06 10:24:28.707429118 +0000 UTC m=+0.219891312 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:24:28 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:24:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:29.195 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec 06 10:24:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:29.557 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "format": "json"}]: dispatch
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "format": "json"}]: dispatch
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:30 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:30.388+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3e304c5e-12d4-47b8-989b-1fc1253e12f7' of type subvolume
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3e304c5e-12d4-47b8-989b-1fc1253e12f7' of type subvolume
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3e304c5e-12d4-47b8-989b-1fc1253e12f7'' moved to trashcan
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:24:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3e304c5e-12d4-47b8-989b-1fc1253e12f7, vol_name:cephfs) < ""
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: pgmap v535: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.679050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670679118, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2836, "num_deletes": 279, "total_data_size": 5385992, "memory_usage": 5462008, "flush_reason": "Manual Compaction"}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670700650, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 3502146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26946, "largest_seqno": 29777, "table_properties": {"data_size": 3490518, "index_size": 7557, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26999, "raw_average_key_size": 22, "raw_value_size": 3466567, "raw_average_value_size": 2917, "num_data_blocks": 316, "num_entries": 1188, "num_filter_entries": 1188, "num_deletions": 279, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016537, "oldest_key_time": 1765016537, "file_creation_time": 1765016670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 21663 microseconds, and 9921 cpu microseconds.
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.700712) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 3502146 bytes OK
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.700743) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.702668) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.702691) EVENT_LOG_v1 {"time_micros": 1765016670702684, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.702716) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 5372797, prev total WAL file size 5372797, number of live WAL files 2.
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.704234) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end)
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(3420KB)], [45(16MB)]
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670704326, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 21166360, "oldest_snapshot_seqno": -1}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13565 keys, 19525202 bytes, temperature: kUnknown
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670805910, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 19525202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19448418, "index_size": 41803, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33925, "raw_key_size": 364939, "raw_average_key_size": 26, "raw_value_size": 19218391, "raw_average_value_size": 1416, "num_data_blocks": 1551, "num_entries": 13565, "num_filter_entries": 13565, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.806201) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 19525202 bytes
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.808725) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.2 rd, 192.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 16.8 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(11.6) write-amplify(5.6) OK, records in: 14129, records dropped: 564 output_compression: NoCompression
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.808820) EVENT_LOG_v1 {"time_micros": 1765016670808758, "job": 26, "event": "compaction_finished", "compaction_time_micros": 101662, "compaction_time_cpu_micros": 50490, "output_level": 6, "num_output_files": 1, "total_output_size": 19525202, "num_input_records": 14129, "num_output_records": 13565, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670809483, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016670811824, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.704028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.811954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.811966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.811970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.811974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:30 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:30.811978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec 06 10:24:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:31.496 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "auth_id": "Joe", "tenant_id": "14fcd30962314973b2c11b49f89b4cb4", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, tenant_id:14fcd30962314973b2c11b49f89b4cb4, vol_name:cephfs) < ""
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID Joe with tenant 14fcd30962314973b2c11b49f89b4cb4
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "format": "json"}]: dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3e304c5e-12d4-47b8-989b-1fc1253e12f7", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8", "osd", "allow rw pool=manila_data namespace=fsvolumens_82abd4b2-157a-49c5-b0f6-995ee895ebc0", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:24:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, tenant_id:14fcd30962314973b2c11b49f89b4cb4, vol_name:cephfs) < ""
Dec 06 10:24:32 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:32Z|00261|binding|INFO|Releasing lport 03e3daed-e0ad-41ef-b4d5-42d85bf912f3 from this chassis (sb_readonly=0)
Dec 06 10:24:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:32.601 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:32 np0005548790.localdomain ceph-mon[301742]: pgmap v536: 177 pgs: 177 active+clean; 245 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Dec 06 10:24:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "auth_id": "Joe", "tenant_id": "14fcd30962314973b2c11b49f89b4cb4", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, vol_name:cephfs) < ""
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb1967bf-2011-4ff5-8e79-30781feb0f35/.meta.tmp'
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb1967bf-2011-4ff5-8e79-30781feb0f35/.meta.tmp' to config b'/volumes/_nogroup/fb1967bf-2011-4ff5-8e79-30781feb0f35/.meta'
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, vol_name:cephfs) < ""
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "format": "json"}]: dispatch
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, vol_name:cephfs) < ""
Dec 06 10:24:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, vol_name:cephfs) < ""
Dec 06 10:24:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:34.199 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:34 np0005548790.localdomain ceph-mon[301742]: pgmap v537: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 06 10:24:34 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:34 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "format": "json"}]: dispatch
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/.meta.tmp'
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/.meta.tmp' to config b'/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/.meta'
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "format": "json"}]: dispatch
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:36 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 06 10:24:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:36.499 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:36 np0005548790.localdomain ceph-mon[301742]: pgmap v538: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 10:24:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "format": "json"}]: dispatch
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "format": "json"}]: dispatch
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:37 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:37.033+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb1967bf-2011-4ff5-8e79-30781feb0f35' of type subvolume
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb1967bf-2011-4ff5-8e79-30781feb0f35' of type subvolume
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, vol_name:cephfs) < ""
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb1967bf-2011-4ff5-8e79-30781feb0f35'' moved to trashcan
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb1967bf-2011-4ff5-8e79-30781feb0f35, vol_name:cephfs) < ""
Dec 06 10:24:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.667629) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677667891, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 366, "num_deletes": 257, "total_data_size": 141651, "memory_usage": 148968, "flush_reason": "Manual Compaction"}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677670720, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 91458, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29783, "largest_seqno": 30143, "table_properties": {"data_size": 89299, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5639, "raw_average_key_size": 18, "raw_value_size": 84804, "raw_average_value_size": 272, "num_data_blocks": 12, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016671, "oldest_key_time": 1765016671, "file_creation_time": 1765016677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3158 microseconds, and 1072 cpu microseconds.
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.670765) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 91458 bytes OK
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.670810) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.672021) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.672043) EVENT_LOG_v1 {"time_micros": 1765016677672036, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.672065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 139144, prev total WAL file size 139144, number of live WAL files 2.
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.672536) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323730' seq:72057594037927935, type:22 .. '6C6F676D0034353233' seq:0, type:0; will stop at (end)
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(89KB)], [48(18MB)]
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677672629, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 19616660, "oldest_snapshot_seqno": -1}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 13346 keys, 19187750 bytes, temperature: kUnknown
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677779050, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 19187750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19112831, "index_size": 40450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33413, "raw_key_size": 361388, "raw_average_key_size": 27, "raw_value_size": 18886863, "raw_average_value_size": 1415, "num_data_blocks": 1487, "num_entries": 13346, "num_filter_entries": 13346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.779376) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 19187750 bytes
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.782055) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.2 rd, 180.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.6 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(424.3) write-amplify(209.8) OK, records in: 13876, records dropped: 530 output_compression: NoCompression
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.782077) EVENT_LOG_v1 {"time_micros": 1765016677782067, "job": 28, "event": "compaction_finished", "compaction_time_micros": 106496, "compaction_time_cpu_micros": 47933, "output_level": 6, "num_output_files": 1, "total_output_size": 19187750, "num_input_records": 13876, "num_output_records": 13346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677782211, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016677783921, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.672414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.784208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.784216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.784219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.784222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:24:37.784225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:24:38 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:38Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ec:95:9c 10.100.0.10
Dec 06 10:24:38 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:24:38Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ec:95:9c 10.100.0.10
Dec 06 10:24:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "Joe", "tenant_id": "438e893229f742e78fe8e62ef6ea17d5", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, tenant_id:438e893229f742e78fe8e62ef6ea17d5, vol_name:cephfs) < ""
Dec 06 10:24:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 06 10:24:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 06 10:24:38 np0005548790.localdomain ceph-mgr[286934]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Dec 06 10:24:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, tenant_id:438e893229f742e78fe8e62ef6ea17d5, vol_name:cephfs) < ""
Dec 06 10:24:38 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 06 10:24:38 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:38.538+0000 7f06345ec640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 06 10:24:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "format": "json"}]: dispatch
Dec 06 10:24:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb1967bf-2011-4ff5-8e79-30781feb0f35", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:38 np0005548790.localdomain ceph-mon[301742]: pgmap v539: 177 pgs: 177 active+clean; 246 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 62 KiB/s wr, 68 op/s
Dec 06 10:24:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 06 10:24:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:39.201 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 06 10:24:39 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "Joe", "tenant_id": "438e893229f742e78fe8e62ef6ea17d5", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3543682375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:24:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3543682375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:24:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3756278959' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, vol_name:cephfs) < ""
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6a347634-f593-49b5-a8b9-3ebf810ccb41/.meta.tmp'
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6a347634-f593-49b5-a8b9-3ebf810ccb41/.meta.tmp' to config b'/volumes/_nogroup/6a347634-f593-49b5-a8b9-3ebf810ccb41/.meta'
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, vol_name:cephfs) < ""
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "format": "json"}]: dispatch
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, vol_name:cephfs) < ""
Dec 06 10:24:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, vol_name:cephfs) < ""
Dec 06 10:24:40 np0005548790.localdomain ceph-mon[301742]: pgmap v540: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Dec 06 10:24:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/701765729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.346 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.346 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.346 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.503 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.713 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.714 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquired lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.714 280869 DEBUG nova.network.neutron [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 06 10:24:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:41.715 280869 DEBUG nova.objects.instance [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:24:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "format": "json"}]: dispatch
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:24:41
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['images', 'volumes', 'manila_metadata', 'manila_data', 'backups', 'vms', '.mgr']
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "tempest-cephx-id-649576020", "tenant_id": "438e893229f742e78fe8e62ef6ea17d5", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-649576020, format:json, prefix:fs subvolume authorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, tenant_id:438e893229f742e78fe8e62ef6ea17d5, vol_name:cephfs) < ""
Dec 06 10:24:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} v 0)
Dec 06 10:24:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} : dispatch
Dec 06 10:24:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-649576020 with tenant 438e893229f742e78fe8e62ef6ea17d5
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:24:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-649576020, format:json, prefix:fs subvolume authorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, tenant_id:438e893229f742e78fe8e62ef6ea17d5, vol_name:cephfs) < ""
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006570940640703518 of space, bias 1.0, pg target 1.3141881281407037 quantized to 32 (current 32)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014866541910943606 of space, bias 1.0, pg target 0.29584418402777773 quantized to 32 (current 32)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.4071718546435884e-05 quantized to 32 (current 32)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.0003785020298250512 quantized to 32 (current 32)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0005714335566722502 of space, bias 4.0, pg target 0.45333728829331843 quantized to 16 (current 16)
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:24:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:42.296 280869 DEBUG nova.network.neutron [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updating instance_info_cache with network_info: [{"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:24:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:42.314 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Releasing lock "refresh_cache-b59377c8-c3d7-452b-8305-d2853ef47bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 06 10:24:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:42.314 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:24:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:24:42 np0005548790.localdomain ceph-mon[301742]: pgmap v541: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Dec 06 10:24:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "tempest-cephx-id-649576020", "tenant_id": "438e893229f742e78fe8e62ef6ea17d5", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} : dispatch
Dec 06 10:24:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:42 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-649576020", "caps": ["mds", "allow rw path=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281", "osd", "allow rw pool=manila_data namespace=fsvolumens_4e35c7b0-6333-486a-9deb-d9473aa05e04", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Dec 06 10:24:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:43.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "format": "json"}]: dispatch
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:43 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:43.551+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6a347634-f593-49b5-a8b9-3ebf810ccb41' of type subvolume
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6a347634-f593-49b5-a8b9-3ebf810ccb41' of type subvolume
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, vol_name:cephfs) < ""
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6a347634-f593-49b5-a8b9-3ebf810ccb41'' moved to trashcan
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:24:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6a347634-f593-49b5-a8b9-3ebf810ccb41, vol_name:cephfs) < ""
Dec 06 10:24:43 np0005548790.localdomain ceph-mon[301742]: pgmap v542: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Dec 06 10:24:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "format": "json"}]: dispatch
Dec 06 10:24:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6a347634-f593-49b5-a8b9-3ebf810ccb41", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:44.206 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:44.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume '4e35c7b0-6333-486a-9deb-d9473aa05e04'
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281
Dec 06 10:24:45 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281],prefix=session evict} (starting...)
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Dec 06 10:24:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:45.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:45.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:45.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:45 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3643a05c-023e-4ee5-80c6-9c2596521f46, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3643a05c-023e-4ee5-80c6-9c2596521f46/.meta.tmp'
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3643a05c-023e-4ee5-80c6-9c2596521f46/.meta.tmp' to config b'/volumes/_nogroup/3643a05c-023e-4ee5-80c6-9c2596521f46/.meta'
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3643a05c-023e-4ee5-80c6-9c2596521f46, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "format": "json"}]: dispatch
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3643a05c-023e-4ee5-80c6-9c2596521f46, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3643a05c-023e-4ee5-80c6-9c2596521f46, vol_name:cephfs) < ""
Dec 06 10:24:45 np0005548790.localdomain podman[321818]: 2025-12-06 10:24:45.572841735 +0000 UTC m=+0.087426197 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 06 10:24:45 np0005548790.localdomain podman[321818]: 2025-12-06 10:24:45.577516032 +0000 UTC m=+0.092100504 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:24:45 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:24:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:46.507 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:46 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "aeb4e71b-e797-4ddb-8668-e9c13b45f222", "format": "json"}]: dispatch
Dec 06 10:24:46 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aeb4e71b-e797-4ddb-8668-e9c13b45f222, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:47 np0005548790.localdomain ceph-mon[301742]: pgmap v543: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Dec 06 10:24:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:24:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "format": "json"}]: dispatch
Dec 06 10:24:47 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aeb4e71b-e797-4ddb-8668-e9c13b45f222, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Dec 06 10:24:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:47.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:24:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:24:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:48.406 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:48.407 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:24:48.408 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:24:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156105 "" "Go-http-client/1.1"
Dec 06 10:24:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:24:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19240 "" "Go-http-client/1.1"
Dec 06 10:24:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:24:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:24:48 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "tempest-cephx-id-649576020", "format": "json"}]: dispatch
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-649576020, format:json, prefix:fs subvolume deauthorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:48 np0005548790.localdomain podman[321838]: 2025-12-06 10:24:48.628932536 +0000 UTC m=+0.134000882 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec 06 10:24:48 np0005548790.localdomain podman[321836]: 2025-12-06 10:24:48.665513366 +0000 UTC m=+0.176417309 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:24:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "aeb4e71b-e797-4ddb-8668-e9c13b45f222", "format": "json"}]: dispatch
Dec 06 10:24:48 np0005548790.localdomain ceph-mon[301742]: pgmap v544: 177 pgs: 177 active+clean; 279 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Dec 06 10:24:48 np0005548790.localdomain podman[321836]: 2025-12-06 10:24:48.682579748 +0000 UTC m=+0.193483691 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:24:48 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:24:48 np0005548790.localdomain podman[321838]: 2025-12-06 10:24:48.699157347 +0000 UTC m=+0.204225763 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:24:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} v 0)
Dec 06 10:24:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} : dispatch
Dec 06 10:24:48 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"} v 0)
Dec 06 10:24:48 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"} : dispatch
Dec 06 10:24:48 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-649576020, format:json, prefix:fs subvolume deauthorize, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "tempest-cephx-id-649576020", "format": "json"}]: dispatch
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-649576020, format:json, prefix:fs subvolume evict, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-649576020, client_metadata.root=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281
Dec 06 10:24:48 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-649576020,client_metadata.root=/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04/5578bf84-b0d0-4fb9-8cfc-a71276656281],prefix=session evict} (starting...)
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:24:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-649576020, format:json, prefix:fs subvolume evict, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:24:48 np0005548790.localdomain podman[321837]: 2025-12-06 10:24:48.794368113 +0000 UTC m=+0.300175068 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 06 10:24:48 np0005548790.localdomain podman[321837]: 2025-12-06 10:24:48.808923887 +0000 UTC m=+0.314730792 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 06 10:24:48 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:49.209 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "format": "json"}]: dispatch
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3643a05c-023e-4ee5-80c6-9c2596521f46, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3643a05c-023e-4ee5-80c6-9c2596521f46, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3643a05c-023e-4ee5-80c6-9c2596521f46' of type subvolume
Dec 06 10:24:49 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:49.341+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3643a05c-023e-4ee5-80c6-9c2596521f46' of type subvolume
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3643a05c-023e-4ee5-80c6-9c2596521f46, vol_name:cephfs) < ""
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3643a05c-023e-4ee5-80c6-9c2596521f46'' moved to trashcan
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:24:49 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3643a05c-023e-4ee5-80c6-9c2596521f46, vol_name:cephfs) < ""
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "tempest-cephx-id-649576020", "format": "json"}]: dispatch
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-649576020", "format": "json"} : dispatch
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"} : dispatch
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"} : dispatch
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-649576020"}]': finished
Dec 06 10:24:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "auth_id": "tempest-cephx-id-649576020", "format": "json"}]: dispatch
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.355 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.355 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "aeb4e71b-e797-4ddb-8668-e9c13b45f222_44a54937-b38a-4109-92db-e338a6e6c4a4", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aeb4e71b-e797-4ddb-8668-e9c13b45f222_44a54937-b38a-4109-92db-e338a6e6c4a4, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aeb4e71b-e797-4ddb-8668-e9c13b45f222_44a54937-b38a-4109-92db-e338a6e6c4a4, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "aeb4e71b-e797-4ddb-8668-e9c13b45f222", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aeb4e71b-e797-4ddb-8668-e9c13b45f222, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:24:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aeb4e71b-e797-4ddb-8668-e9c13b45f222, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:50 np0005548790.localdomain ceph-mon[301742]: pgmap v545: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Dec 06 10:24:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "format": "json"}]: dispatch
Dec 06 10:24:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3643a05c-023e-4ee5-80c6-9c2596521f46", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:24:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2358963560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.804 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.885 280869 DEBUG nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:24:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:50.886 280869 DEBUG nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.102 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.105 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11284MB free_disk=41.700347900390625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.105 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.106 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.202 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Instance b59377c8-c3d7-452b-8305-d2853ef47bb4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.203 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.204 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:24:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 67 KiB/s wr, 5 op/s
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.502 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.526 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.527 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.542 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.548 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.571 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:24:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:51.618 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "aeb4e71b-e797-4ddb-8668-e9c13b45f222_44a54937-b38a-4109-92db-e338a6e6c4a4", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "aeb4e71b-e797-4ddb-8668-e9c13b45f222", "force": true, "format": "json"}]: dispatch
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2358963560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/425768214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Dec 06 10:24:51 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 06 10:24:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8
Dec 06 10:24:52 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0/2859c761-f712-4409-870b-5e31b0c42ab8],prefix=session evict} (starting...)
Dec 06 10:24:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:24:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1474548798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:52.139 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:52.146 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:24:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:52.173 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:24:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:52.209 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:24:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:52.210 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: pgmap v546: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 67 KiB/s wr, 5 op/s
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1474548798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:52 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1025362414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:24:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s rd, 124 KiB/s wr, 11 op/s
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:24:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:24:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:24:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 06 10:24:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:54.211 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:54 np0005548790.localdomain ceph-mon[301742]: pgmap v547: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s rd, 124 KiB/s wr, 11 op/s
Dec 06 10:24:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 100 KiB/s wr, 8 op/s
Dec 06 10:24:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "admin", "tenant_id": "14fcd30962314973b2c11b49f89b4cb4", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, tenant_id:14fcd30962314973b2c11b49f89b4cb4, vol_name:cephfs) < ""
Dec 06 10:24:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Dec 06 10:24:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 06 10:24:55 np0005548790.localdomain ceph-mgr[286934]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 06 10:24:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, tenant_id:14fcd30962314973b2c11b49f89b4cb4, vol_name:cephfs) < ""
Dec 06 10:24:55 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 06 10:24:55 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:24:55.324+0000 7f06345ec640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 06 10:24:55 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 06 10:24:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e244 e244: 6 total, 6 up, 6 in
Dec 06 10:24:56 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab", "format": "json"}]: dispatch
Dec 06 10:24:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:56 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.340 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.341 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.360 280869 DEBUG nova.objects.instance [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lazy-loading 'flavor' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.408 280869 INFO nova.virt.libvirt.driver [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Ignoring supplied device name: /dev/vdb
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.426 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.546 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:56 np0005548790.localdomain podman[321944]: 2025-12-06 10:24:56.587120753 +0000 UTC m=+0.096951496 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.601 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.602 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.602 280869 INFO nova.compute.manager [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Attaching volume aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965 to /dev/vdb
Dec 06 10:24:56 np0005548790.localdomain podman[321944]: 2025-12-06 10:24:56.605570091 +0000 UTC m=+0.115400854 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:24:56 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.736 280869 DEBUG os_brick.utils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.108', 'multipath': True, 'enforce_multipath': True, 'host': 'np0005548790.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 06 10:24:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:56.738 280869 INFO oslo.privsep.daemon [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpdv7s9p9w/privsep.sock']
Dec 06 10:24:56 np0005548790.localdomain ceph-mon[301742]: pgmap v548: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 100 KiB/s wr, 8 op/s
Dec 06 10:24:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "admin", "tenant_id": "14fcd30962314973b2c11b49f89b4cb4", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:56 np0005548790.localdomain ceph-mon[301742]: osdmap e244: 6 total, 6 up, 6 in
Dec 06 10:24:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 120 KiB/s wr, 9 op/s
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.365 280869 INFO oslo.privsep.daemon [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Spawned new privsep daemon via rootwrap
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.276 321969 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.281 321969 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.284 321969 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.285 321969 INFO oslo.privsep.daemon [-] privsep daemon running as pid 321969
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.369 321969 DEBUG oslo.privsep.daemon [-] privsep: reply[3b53f13a-1956-433c-be8b-82aefea60284]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.461 321969 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.471 321969 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.472 321969 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb5e325-47d0-4151-8b12-4605e25f122f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.473 321969 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.480 321969 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.480 321969 DEBUG oslo.privsep.daemon [-] privsep: reply[033bfb4c-371b-4bcb-8fe6-b34882e6c786]: (4, ('InitiatorName=iqn.1994-05.com.redhat:a4f71413484d\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.483 321969 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.491 321969 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.491 321969 DEBUG oslo.privsep.daemon [-] privsep: reply[4327dc5b-425b-4d87-9701-fbf5530cd240]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.493 321969 DEBUG oslo.privsep.daemon [-] privsep: reply[74daed09-6423-46d6-b62f-888b0858020d]: (4, 'f03c6239-85fa-4e2b-b1f7-56cf939bb96f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.494 280869 DEBUG oslo_concurrency.processutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.516 280869 DEBUG oslo_concurrency.processutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.520 280869 DEBUG os_brick.initiator.connectors.lightos [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.521 280869 DEBUG os_brick.initiator.connectors.lightos [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.521 280869 DEBUG os_brick.initiator.connectors.lightos [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:f03c6239-85fa-4e2b-b1f7-56cf939bb96f dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.522 280869 DEBUG os_brick.utils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] <== get_connector_properties: return (784ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.108', 'host': 'np0005548790.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:a4f71413484d', 'do_local_attach': False, 'nvme_hostid': 'f03c6239-85fa-4e2b-b1f7-56cf939bb96f', 'system uuid': 'f03c6239-85fa-4e2b-b1f7-56cf939bb96f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:f03c6239-85fa-4e2b-b1f7-56cf939bb96f', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 06 10:24:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:57.523 280869 DEBUG nova.virt.block_device [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updating existing volume attachment record: 9fd8fe29-efa0-4174-a6a7-cc2325d45439 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 06 10:24:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab", "format": "json"}]: dispatch
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.508 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.509 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.511 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.520 280869 DEBUG nova.objects.instance [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lazy-loading 'flavor' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.563 280869 DEBUG nova.virt.libvirt.driver [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Attempting to attach volume aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.566 280869 DEBUG nova.virt.libvirt.guest [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] attach device xml: <disk type="network" device="disk">
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   <source protocol="rbd" name="volumes/volume-aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965">
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.103" port="6789"/>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.104" port="6789"/>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.105" port="6789"/>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   </source>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   <auth username="openstack">
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:     <secret type="ceph" uuid="1939e851-b10c-5c3b-9bb7-8e7f380233e8"/>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   </auth>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   <target dev="vdb" bus="virtio"/>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:   <serial>aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965</serial>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: </disk>
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.739 280869 DEBUG nova.virt.libvirt.driver [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.740 280869 DEBUG nova.virt.libvirt.driver [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.740 280869 DEBUG nova.virt.libvirt.driver [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.740 280869 DEBUG nova.virt.libvirt.driver [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] No VIF found with MAC fa:16:3e:ec:95:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 06 10:24:58 np0005548790.localdomain ceph-mon[301742]: pgmap v550: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 120 KiB/s wr, 9 op/s
Dec 06 10:24:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2198419892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:24:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:58.893 280869 DEBUG oslo_concurrency.lockutils [None req-db35458f-d2a1-4d28-a30d-5c1c7e2269a1 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:24:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "david", "tenant_id": "14fcd30962314973b2c11b49f89b4cb4", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, tenant_id:14fcd30962314973b2c11b49f89b4cb4, vol_name:cephfs) < ""
Dec 06 10:24:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 06 10:24:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 06 10:24:58 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID david with tenant 14fcd30962314973b2c11b49f89b4cb4
Dec 06 10:24:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:24:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, tenant_id:14fcd30962314973b2c11b49f89b4cb4, vol_name:cephfs) < ""
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:24:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:24:59.216 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:24:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 99 KiB/s wr, 8 op/s
Dec 06 10:24:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:24:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:24:59 np0005548790.localdomain podman[321999]: 2025-12-06 10:24:59.582879387 +0000 UTC m=+0.086674197 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:24:59 np0005548790.localdomain podman[321998]: 2025-12-06 10:24:59.64581629 +0000 UTC m=+0.138628793 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:24:59 np0005548790.localdomain podman[321998]: 2025-12-06 10:24:59.658219926 +0000 UTC m=+0.151032469 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:24:59 np0005548790.localdomain podman[321999]: 2025-12-06 10:24:59.67126593 +0000 UTC m=+0.175060770 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 06 10:24:59 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:24:59 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:24:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "9ca8d539-a020-426e-a70b-3e1ee1b92584", "format": "json"}]: dispatch
Dec 06 10:24:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9ca8d539-a020-426e-a70b-3e1ee1b92584, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:59 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9ca8d539-a020-426e-a70b-3e1ee1b92584, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "david", "tenant_id": "14fcd30962314973b2c11b49f89b4cb4", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8", "osd", "allow rw pool=manila_data namespace=fsvolumens_efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: pgmap v551: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 99 KiB/s wr, 8 op/s
Dec 06 10:24:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "9ca8d539-a020-426e-a70b-3e1ee1b92584", "format": "json"}]: dispatch
Dec 06 10:25:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1792782973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 99 KiB/s wr, 8 op/s
Dec 06 10:25:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:01.550 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp'
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp' to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta'
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "format": "json"}]: dispatch
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:25:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:25:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e245 e245: 6 total, 6 up, 6 in
Dec 06 10:25:02 np0005548790.localdomain ceph-mon[301742]: pgmap v552: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 99 KiB/s wr, 8 op/s
Dec 06 10:25:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6baadd1a-0add-4c99-8c70-7fc88a6a0739/.meta.tmp'
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6baadd1a-0add-4c99-8c70-7fc88a6a0739/.meta.tmp' to config b'/volumes/_nogroup/6baadd1a-0add-4c99-8c70-7fc88a6a0739/.meta'
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "format": "json"}]: dispatch
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e246 e246: 6 total, 6 up, 6 in
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "format": "json"}]: dispatch
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: osdmap e245: 6 total, 6 up, 6 in
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "format": "json"}]: dispatch
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: osdmap e246: 6 total, 6 up, 6 in
Dec 06 10:25:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 95 KiB/s wr, 34 op/s
Dec 06 10:25:03 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:03Z|00262|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec 06 10:25:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "bd3fee73-c2d0-4d8d-bc68-793e93881208", "format": "json"}]: dispatch
Dec 06 10:25:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bd3fee73-c2d0-4d8d-bc68-793e93881208, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bd3fee73-c2d0-4d8d-bc68-793e93881208, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 06 10:25:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4089585165' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:04 np0005548790.localdomain ceph-mon[301742]: pgmap v555: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 95 KiB/s wr, 34 op/s
Dec 06 10:25:04 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "bd3fee73-c2d0-4d8d-bc68-793e93881208", "format": "json"}]: dispatch
Dec 06 10:25:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4089585165' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:04.218 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e247 e247: 6 total, 6 up, 6 in
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404", "format": "json"}]: dispatch
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 67 KiB/s wr, 39 op/s
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "auth_id": "david", "tenant_id": "438e893229f742e78fe8e62ef6ea17d5", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, tenant_id:438e893229f742e78fe8e62ef6ea17d5, vol_name:cephfs) < ""
Dec 06 10:25:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 06 10:25:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, tenant_id:438e893229f742e78fe8e62ef6ea17d5, vol_name:cephfs) < ""
Dec 06 10:25:05 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:05.756+0000 7f06345ec640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 06 10:25:05 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 06 10:25:06 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e248 e248: 6 total, 6 up, 6 in
Dec 06 10:25:06 np0005548790.localdomain ceph-mon[301742]: osdmap e247: 6 total, 6 up, 6 in
Dec 06 10:25:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404", "format": "json"}]: dispatch
Dec 06 10:25:06 np0005548790.localdomain ceph-mon[301742]: pgmap v557: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 67 KiB/s wr, 39 op/s
Dec 06 10:25:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "auth_id": "david", "tenant_id": "438e893229f742e78fe8e62ef6ea17d5", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:25:06 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 06 10:25:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:06.554 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:07 np0005548790.localdomain ceph-mon[301742]: osdmap e248: 6 total, 6 up, 6 in
Dec 06 10:25:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 76 KiB/s wr, 45 op/s
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.691 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4beeef56fdce3d1319f81141d17e5a7962dfb574f6ca8010d137265826f59628" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.830 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Sat, 06 Dec 2025 10:25:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-61cffad3-1cd5-4456-a2f5-f812e3ebb791 x-openstack-request-id: req-61cffad3-1cd5-4456-a2f5-f812e3ebb791 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.830 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "3b9dcd46-fa1b-4714-ba2b-665da2f67af6", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/3b9dcd46-fa1b-4714-ba2b-665da2f67af6"}]}, {"id": "72bdd1eb-059b-401d-8f8a-ec7c66937f24", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/72bdd1eb-059b-401d-8f8a-ec7c66937f24"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/72bdd1eb-059b-401d-8f8a-ec7c66937f24"}]}, {"id": "a0a7498e-22eb-495c-a2e3-89ba9e483bf6", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.830 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-61cffad3-1cd5-4456-a2f5-f812e3ebb791 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.833 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}4beeef56fdce3d1319f81141d17e5a7962dfb574f6ca8010d137265826f59628" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519
Dec 06 10:25:07 np0005548790.localdomain sudo[322044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:25:07 np0005548790.localdomain sudo[322044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:25:07 np0005548790.localdomain sudo[322044]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.851 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Sat, 06 Dec 2025 10:25:07 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-1fea6f77-1378-4e00-9d94-1cec2c6c385f x-openstack-request-id: req-1fea6f77-1378-4e00-9d94-1cec2c6c385f _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.851 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "a0a7498e-22eb-495c-a2e3-89ba9e483bf6", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.851 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/a0a7498e-22eb-495c-a2e3-89ba9e483bf6 used request id req-1fea6f77-1378-4e00-9d94-1cec2c6c385f request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.853 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'name': 'tempest-VolumesBackupsTest-instance-739598656', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'np0005548790.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b51f704fe6204487b0317c3332364cca', 'user_id': 'b40d497af0834616a664e6909c0f6685', 'hostId': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.853 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.857 12 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b59377c8-c3d7-452b-8305-d2853ef47bb4 / tapc8391efe-ea inspect_vnics /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/inspector.py:136
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.857 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a96b469d-2cc7-41c0-9386-cca58c05fa22', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.853828', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd6733b6e-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': '2c38155f87c6f1b03682b98e351f8e783a2afd1f769afcfb1377ff105911bb40'}]}, 'timestamp': '2025-12-06 10:25:07.858686', '_unique_id': 'fc34f723a89b4dfca34138c3eaa1b499'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.865 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.871 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.871 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.incoming.bytes volume: 4269 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'bc758ac8-d544-4806-a9ae-ddd86defe846', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 4269, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.871307', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd6754030-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': '21033b68ae6c4bc4b8a5d4d8eb797631f44230ceea23b884942e659c7bb4a1e6'}]}, 'timestamp': '2025-12-06 10:25:07.871834', '_unique_id': 'edaf3710d0e14ae2a7d021a5b3b19a91'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.872 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.873 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.924 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.latency volume: 16618839348 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.925 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.925 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'd55bb21b-e823-453f-a37b-af6ebe0202b9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 16618839348, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.874078', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd67d6f30-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'ccdafc1f3f749d05c2b5a08e8cd1426fa723b4581e8c7adbd313b11f5063d109'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.874078', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd67d801a-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '8c3e388288a164d26a3e52719bee88cdd571e53cccffaa91db4de4bf1050e79f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.874078', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd67d90aa-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '9ed2e90e9b43f32534a4760225f457b9d6787784a006761f631bcf2700fbddd7'}]}, 'timestamp': '2025-12-06 10:25:07.926240', '_unique_id': '7aa5e8a358b74b58985b7f81612819e5'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain sudo[322062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain sudo[322062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.927 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.928 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.928 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.latency volume: 1598577062 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.929 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.latency volume: 3588056 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.929 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.latency volume: 103339425 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5f0b47f2-2966-46e3-bff8-3816f4e4bc5b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 1598577062, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.928739', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd67e0512-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'c492afc41dfbfc2d589930d11beaf2a76da5b0e8ae52f33a88b8a7894717e0be'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 3588056, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 
'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.928739', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd67e1570-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'bda1cf4463716073964a8e67077ce949dee76c9b2366a78e59d3103a944955f6'}, {'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 103339425, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.928739', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': 
{'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd67e2b50-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'b371e274045bd6fb70a1d9d0de52d3c19afb5ed876ad01c520392331100b2d6b'}]}, 'timestamp': '2025-12-06 10:25:07.931157', '_unique_id': 'd2c28577788642f9bf9a100b2d1736b4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.932 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.933 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.933 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.bytes volume: 72970240 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.933 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.934 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '24935d48-ebed-429b-b0c7-f600b00ff6b5', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 72970240, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.933518', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd67ebd72-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '49c1f7e04b0f5d9e7ddd9d787e12ea0fdd4a2ce959b8a2ec9135cc2a600c7821'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 
'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.933518', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd67ece98-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '967c9372935eaa873a8ef79cd7366c760ffde0e35db6eaf631c68b77c600a85a'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.933518', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd67edd98-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '337c48005bc6ecf4c54c68305cff54434b033c7183cf293f4774ef29557db476'}]}, 'timestamp': '2025-12-06 10:25:07.934759', '_unique_id': '0fee006539c34598935fab8e8abd2580'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.935 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.937 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.937 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b407c10a-828a-4084-810c-ef16f0cfcbd2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 3390, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.937302', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd67f51a6-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': 'd323f56c2ef42c2270383fa4ac75d548206836ca5f770e57f83b8944a567387e'}]}, 'timestamp': '2025-12-06 10:25:07.937765', '_unique_id': '1ec422919d6d4567b45128a6b8f0198a'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.938 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.940 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.940 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7647d808-bd04-48e7-90ff-3e0dd8afea7c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 28, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.940157', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd67fc12c-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': '740771d28596d80660420a026d169e36a82455bafec700912b57744bad6647c7'}]}, 'timestamp': '2025-12-06 10:25:07.940675', '_unique_id': '8e05ffd8f65c4f41a924b40517afb2fb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.941 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.942 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.956 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/cpu volume: 13360000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fecc5490-0484-40c0-a676-308c5282474d', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 13360000000, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'timestamp': '2025-12-06T10:25:07.942930', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': 'd6823d76-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.16343795, 'message_signature': '457b5f9f9b752d9a75346062c651b3de36ee1b42e014d15ef3f797bcf5905fa8'}]}, 'timestamp': '2025-12-06 10:25:07.956796', '_unique_id': '87307636de694b35b271dcc26ab74ca1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.957 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.968 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.968 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.968 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '05e89c06-eb68-4724-b06b-bdb6c91047ca', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.957907', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd6840958-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': '9733a5f9b309c6f84c1bcfa89a4b2987aecd4a0c0f44e45cbf9d4b455f467d00'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 
'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.957907', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd68410f6-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': 'be8493c1cf57f587b96e48a323f939b4ad355093618f6d9479107095ba017a29'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.957907', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd6841844-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': '399d91f804598262845874f68101e6cb10b244991bc8921ae96839791fbff93c'}]}, 'timestamp': '2025-12-06 10:25:07.968916', '_unique_id': 'c5e50acd8c54479cae981c78f8fbd485'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.969 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.bytes volume: 31009280 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.bytes volume: 12288 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '9d0e99c2-0dfc-46ad-997f-7235311c8efa', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 31009280, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.969944', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd6844832-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '6673ddc3079e2e2314d14ff0ce470ca8778326641bd1fe864e93677695de1e64'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 12288, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.969944', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd6844f1c-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '1d47c85a9395908a3d00c082296d13f831017295414722b843b1633a852ff024'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 299326, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.969944', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd68455de-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'e5c386ba381e1195531a362ac1028846fafb16d4be434cf6d5ace5c2d6d039ad'}]}, 'timestamp': '2025-12-06 10:25:07.970491', '_unique_id': '26a24420356b4772b8466b66bfdf2919'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.970 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.971 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.971 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.971 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>]
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.971 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.971 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>]
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c42e6c60-ba50-48e0-bab7-c6901b6a3f5c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.972181', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd6849f58-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': 'd45e8310c0d3aeb730678d23b576ea8d465c0642147b10fcaa00b1220256031c'}]}, 'timestamp': '2025-12-06 10:25:07.972387', '_unique_id': 'fb955d99dd054b0198532f262db167a0'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.972 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.973 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.973 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.973 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.973 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '15f8edfe-6acf-49ac-bc22-b68d852d2235', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.973333', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd684cc1c-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': '74a8ba9dc346361eed7d2a4bdcc7d827862020b3e896edcce0237692a6d136ec'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.973333', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd684d2de-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': '9092bd783e80f9451b958a8125ce3b699f2e86b2810d1a303cb3f14c8088d8af'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.973333', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd684da54-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': '9ad2eb53f6c1af6f149f9a8edc22cb59575697fb49af74cf30a38908bfb4c5a7'}]}, 'timestamp': '2025-12-06 10:25:07.973883', '_unique_id': 'e45455d468b543129f4e757f28b2a8eb'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.974 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ded4889a-45c4-4995-bd5e-3e27b463d085', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 27, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.974850', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd6850790-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': 'c3f2ea04f5a8ab65090444860281b87a05c58fb501fb5a29d64167ac9ece8477'}]}, 'timestamp': '2025-12-06 10:25:07.975053', '_unique_id': '396908b5125743b88df4ea6e20a9ecb2'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.975 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'fc8b3ca9-5c22-4f30-8d3c-bf09a933571b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.976000', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd685344a-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': '00205402f7ef4a2c2736c03daac179cbf5986fd8c64b0c2bfb10c78873e517bf'}]}, 'timestamp': '2025-12-06 10:25:07.976198', '_unique_id': '3fd25a08505c4d28b94e3e659e27f9c6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.976 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.977 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.977 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.requests volume: 273 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.977 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.977 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e9650e7e-a599-449d-8c75-387ea8ba3f75', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 273, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.977152', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd685614a-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '2f009a2096a2dd03a12dd0fd837e5888c51a7876d7ca6070189adb2b0912295b'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 
'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.977152', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd68568e8-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'a3fac3a1c02099a2770a44203f2f4d3bf6a1beeb27a6b6eaaf0b28ceac75930f'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.977152', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd6856faa-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'b7441b408b64601bab1f1de59d2cee53a0beb86c03aa400e4a6854ad72330632'}]}, 'timestamp': '2025-12-06 10:25:07.977705', '_unique_id': 'e8ae97ee3f1f40e89eac6f372b22d9b1'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.978 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1fae3dd2-77ab-4390-b9fb-c3c9ca58002a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.978668', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd6859c78-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': 'b73e461b2b3c375462a87ea0c9dafe0a19ac48237e9945a93f487c1ae4b28f53'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 
'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.978668', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd685a420-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': 'bb12c70f76df560f4870203b39be13f3ea43c5ed31da0d2d1fe37ceda008c380'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 509952, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.978668', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 
'6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd685aa88-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.165069414, 'message_signature': '22a6ca9cd9f98c610b38381b20900b58222b2fe10ab6d369e31d310a4c2cf87b'}]}, 'timestamp': '2025-12-06 10:25:07.979211', '_unique_id': '3cbe6626d735450a8028777ab84b675d'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.979 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '1637c10b-631c-48f5-b8cf-2b32c8225654', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.incoming.packets.drop', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.980180', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd685d7a6-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': '7c0475fd0848a9a8cd0ba17ff6d14ba0951dd2c74f5d2b81b66b93a4833b9e62'}]}, 'timestamp': '2025-12-06 10:25:07.980382', '_unique_id': '5d40cd5273ad4f0faf1cf0b3f70157ca'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.980 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'c2b21a72-1453-4b68-a24e-745b35ac83e2', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.packets.error', 'counter_type': 'cumulative', 'counter_unit': 'packet', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.981361', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd68605be-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': 'dd74be325a63bb5290410b675acfd059a77b33031a46f9762fb5794ac931847a'}]}, 'timestamp': '2025-12-06 10:25:07.981560', '_unique_id': 'f4e86dfa41d74db28692f7b488c06083'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.981 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.982 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.982 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '5447b8fd-e743-45d4-aa74-33cd3b675ba9', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'network.outgoing.bytes.delta', 'counter_type': 'delta', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'instance-0000000a-b59377c8-c3d7-452b-8305-d2853ef47bb4-tapc8391efe-ea', 'timestamp': '2025-12-06T10:25:07.982470', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'tapc8391efe-ea', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'mac': 'fa:16:3e:ec:95:9c', 'fref': None, 'parameters': {'interfaceid': None, 'bridge': None}, 'vnic_name': 'tapc8391efe-ea'}, 'message_id': 'd686311a-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.061112031, 'message_signature': '3a9a1e028491a5c6d740ff0b2fc2dc42a14a3210eb4635d9935329e0262d63ac'}]}, 'timestamp': '2025-12-06 10:25:07.982669', '_unique_id': 'cbb9291da67f403d91fd4c556139b40f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.requests volume: 1133 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.requests volume: 3 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.983 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '55741009-deab-4651-b61d-04c98827722f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1133, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vda', 'timestamp': '2025-12-06T10:25:07.983605', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 'd6865d84-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'f70a14ef797770e67f0b0f95046e2b7f7467f8e369833dfaade4a1a46610395e'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 3, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-vdb', 'timestamp': '2025-12-06T10:25:07.983605', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vdb'}, 'message_id': 'd6866572-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': 'a3a3223b82b9e75a3d0826b347bfbb54fc8c4711683523348806397f28ad51e8'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 120, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4-sda', 'timestamp': '2025-12-06T10:25:07.983605', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': 'd6866c2a-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.081363619, 'message_signature': '2c3e944a8a03df8e24305f68b73c4f72fe9dfead7ac75708d03be59be1fb7854'}]}, 'timestamp': '2025-12-06 10:25:07.984169', '_unique_id': 'd994a561aeda4fdeb08a3a44fe5047ef'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.984 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.985 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.985 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.985 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>]
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.985 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.985 12 DEBUG ceilometer.compute.pollsters [-] b59377c8-c3d7-452b-8305-d2853ef47bb4/memory.usage volume: 42.85546875 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '2e16ce0a-cebd-4e58-a9be-f232be154f5f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'memory.usage', 'counter_type': 'gauge', 'counter_unit': 'MB', 'counter_volume': 42.85546875, 'user_id': 'b40d497af0834616a664e6909c0f6685', 'user_name': None, 'project_id': 'b51f704fe6204487b0317c3332364cca', 'project_name': None, 'resource_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'timestamp': '2025-12-06T10:25:07.985479', 'resource_metadata': {'display_name': 'tempest-VolumesBackupsTest-instance-739598656', 'name': 'instance-0000000a', 'instance_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'instance_type': 'm1.nano', 'host': '7ec5e35d4fbc771b9684833e93ff3e4c545ea8da22c8cc00fe55533c', 'instance_host': 'np0005548790.localdomain', 'flavor': {'id': 'a0a7498e-22eb-495c-a2e3-89ba9e483bf6', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': '6a944ab6-8965-4055-b7fc-af6e395005ea'}, 'image_ref': '6a944ab6-8965-4055-b7fc-af6e395005ea', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1}, 'message_id': 'd686a708-d28d-11f0-98c0-fa163e7e495b', 'monotonic_time': 13123.16343795, 'message_signature': 'f0449258e416f1b1095c268861919baccc4a6b7fc2860ce208da0be5539cd225'}]}, 'timestamp': '2025-12-06 10:25:07.985683', '_unique_id': 'c20c6418a05d4f7ca191393742df0cc4'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     yield
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return fun(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self._connection = self._establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     conn = self.transport.establish_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     conn.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.transport.connect()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self._connect(self.host, self.port, self.connect_timeout)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.sock.connect(sa)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last):
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.transport._send_notification(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self._driver.send_notification(target, ctxt, message, version,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return self._send(target, ctxt, message,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn:
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return rpc_common.ConnectionContext(self._connection_pool,
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.connection = connection_pool.get(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return self.create(retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return self.connection_cls(self.conf, self.url, purpose, retry=retry)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.ensure_connection()
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.connection.ensure_connection(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self._ensure_connection(*args, **kwargs)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     return retry_over_time(
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     self.gen.throw(typ, value, traceback)
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging   File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging     raise ConnectionError(str(exc)) from exc
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR oslo_messaging.notify.messaging 
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163
Dec 06 10:25:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:25:07.986 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-VolumesBackupsTest-instance-739598656>]
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "541781d2-de06-47af-8a8c-92b3f31186b7", "format": "json"}]: dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:541781d2-de06-47af-8a8c-92b3f31186b7, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:541781d2-de06-47af-8a8c-92b3f31186b7, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:08 np0005548790.localdomain systemd-journald[47675]: Data hash table of /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Dec 06 10:25:08 np0005548790.localdomain systemd-journald[47675]: /run/log/journal/4b30904fc4748c16d0c72dbebcabab49/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 06 10:25:08 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:25:08 np0005548790.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 06 10:25:08 np0005548790.localdomain sudo[322062]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404", "target_sub_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, target_sub_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, vol_name:cephfs) < ""
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: pgmap v559: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 76 KiB/s wr, 45 op/s
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp' to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.clone_index] tracking-id d7826c54-2301-40e2-8793-49501b217c0f for path b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64'
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp' to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, target_sub_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, vol_name:cephfs) < ""
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 23c555bf-6388-4782-883b-2fbe8d757e94 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 23c555bf-6388-4782-883b-2fbe8d757e94 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 23c555bf-6388-4782-883b-2fbe8d757e94 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.764+0000 7f06395f6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.764+0000 7f06395f6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.764+0000 7f06395f6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.764+0000 7f06395f6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.764+0000 7f06395f6640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:25:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 691a6613-80b6-44f7-899f-c0e9f04c8f64)
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.791+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.791+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.791+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.791+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:08.791+0000 7f0639df7640 -1 client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: client.0 error registering admin socket command: (17) File exists
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 691a6613-80b6-44f7-899f-c0e9f04c8f64) -- by 0 seconds
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp' to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta'
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:09 np0005548790.localdomain sudo[322136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:25:09 np0005548790.localdomain sudo[322136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:25:09 np0005548790.localdomain sudo[322136]: pam_unix(sudo:session): session closed for user root
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e249 e249: 6 total, 6 up, 6 in
Dec 06 10:25:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:09.219 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 34 KiB/s wr, 44 op/s
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "541781d2-de06-47af-8a8c-92b3f31186b7", "format": "json"}]: dispatch
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404", "target_sub_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:25:09 np0005548790.localdomain ceph-mon[301742]: osdmap e249: 6 total, 6 up, 6 in
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '6baadd1a-0add-4c99-8c70-7fc88a6a0739'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.snap/f5e12b35-c54a-4c08-b88c-95eb83ef6404/71f6eb96-c2fc-4664-855f-be324d277991' to b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/ae2ffb5f-a82b-4a26-b95e-b4639ec10f88'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/6baadd1a-0add-4c99-8c70-7fc88a6a0739/f35ddd88-039f-4ab8-a9d0-fa4b2dcce59c
Dec 06 10:25:10 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/6baadd1a-0add-4c99-8c70-7fc88a6a0739/f35ddd88-039f-4ab8-a9d0-fa4b2dcce59c],prefix=session evict} (starting...)
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp' to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.clone_index] untracking d7826c54-2301-40e2-8793-49501b217c0f
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp' to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta.tmp' to config b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64/.meta'
Dec 06 10:25:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 691a6613-80b6-44f7-899f-c0e9f04c8f64)
Dec 06 10:25:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:10 np0005548790.localdomain ceph-mon[301742]: pgmap v561: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 34 KiB/s wr, 44 op/s
Dec 06 10:25:10 np0005548790.localdomain ceph-mon[301742]: mgrmap e53: np0005548790.kvkfyr(active, since 13m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 33 KiB/s wr, 43 op/s
Dec 06 10:25:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:11.558 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/4192589746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e250 e250: 6 total, 6 up, 6 in
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "da9cb17d-a418-4d10-ae1e-46dd9b618283", "format": "json"}]: dispatch
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da9cb17d-a418-4d10-ae1e-46dd9b618283, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:da9cb17d-a418-4d10-ae1e-46dd9b618283, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8
Dec 06 10:25:12 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8/413ff259-8ca8-446a-9a48-682d6b0aa9e8],prefix=session evict} (starting...)
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: pgmap v562: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 33 KiB/s wr, 43 op/s
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: osdmap e250: 6 total, 6 up, 6 in
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:25:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e251 e251: 6 total, 6 up, 6 in
Dec 06 10:25:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 156 KiB/s wr, 116 op/s
Dec 06 10:25:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "da9cb17d-a418-4d10-ae1e-46dd9b618283", "format": "json"}]: dispatch
Dec 06 10:25:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "david", "format": "json"}]: dispatch
Dec 06 10:25:13 np0005548790.localdomain ceph-mon[301742]: osdmap e251: 6 total, 6 up, 6 in
Dec 06 10:25:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:14.249 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:25:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:25:14 np0005548790.localdomain ceph-mon[301742]: pgmap v565: 177 pgs: 177 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 156 KiB/s wr, 116 op/s
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 119 KiB/s wr, 69 op/s
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "da9cb17d-a418-4d10-ae1e-46dd9b618283_59a2f9e2-3ade-456e-8a51-63c6c5f92484", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da9cb17d-a418-4d10-ae1e-46dd9b618283_59a2f9e2-3ade-456e-8a51-63c6c5f92484, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da9cb17d-a418-4d10-ae1e-46dd9b618283_59a2f9e2-3ade-456e-8a51-63c6c5f92484, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "da9cb17d-a418-4d10-ae1e-46dd9b618283", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da9cb17d-a418-4d10-ae1e-46dd9b618283, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:da9cb17d-a418-4d10-ae1e-46dd9b618283, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e252 e252: 6 total, 6 up, 6 in
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "format": "json"}]: dispatch
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:15.837+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6baadd1a-0add-4c99-8c70-7fc88a6a0739' of type subvolume
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6baadd1a-0add-4c99-8c70-7fc88a6a0739' of type subvolume
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6baadd1a-0add-4c99-8c70-7fc88a6a0739'' moved to trashcan
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6baadd1a-0add-4c99-8c70-7fc88a6a0739, vol_name:cephfs) < ""
Dec 06 10:25:16 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:25:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:16.562 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:16 np0005548790.localdomain systemd[1]: tmp-crun.lHzmdh.mount: Deactivated successfully.
Dec 06 10:25:16 np0005548790.localdomain podman[322157]: 2025-12-06 10:25:16.588048717 +0000 UTC m=+0.099990437 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:25:16 np0005548790.localdomain podman[322157]: 2025-12-06 10:25:16.622230512 +0000 UTC m=+0.134172232 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 10:25:16 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:25:16 np0005548790.localdomain ceph-mon[301742]: pgmap v566: 177 pgs: 177 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 119 KiB/s wr, 69 op/s
Dec 06 10:25:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "da9cb17d-a418-4d10-ae1e-46dd9b618283_59a2f9e2-3ade-456e-8a51-63c6c5f92484", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "da9cb17d-a418-4d10-ae1e-46dd9b618283", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:16 np0005548790.localdomain ceph-mon[301742]: osdmap e252: 6 total, 6 up, 6 in
Dec 06 10:25:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "format": "json"}]: dispatch
Dec 06 10:25:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 122 KiB/s wr, 71 op/s
Dec 06 10:25:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e253 e253: 6 total, 6 up, 6 in
Dec 06 10:25:17 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6baadd1a-0add-4c99-8c70-7fc88a6a0739", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:17 np0005548790.localdomain ceph-mon[301742]: osdmap e253: 6 total, 6 up, 6 in
Dec 06 10:25:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:25:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:25:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:25:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156105 "" "Go-http-client/1.1"
Dec 06 10:25:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:25:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19251 "" "Go-http-client/1.1"
Dec 06 10:25:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e254 e254: 6 total, 6 up, 6 in
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "541781d2-de06-47af-8a8c-92b3f31186b7_1ac163ca-79ea-43b6-8028-d76d24ca4cd1", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:541781d2-de06-47af-8a8c-92b3f31186b7_1ac163ca-79ea-43b6-8028-d76d24ca4cd1, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:541781d2-de06-47af-8a8c-92b3f31186b7_1ac163ca-79ea-43b6-8028-d76d24ca4cd1, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "541781d2-de06-47af-8a8c-92b3f31186b7", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:541781d2-de06-47af-8a8c-92b3f31186b7, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:541781d2-de06-47af-8a8c-92b3f31186b7, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:18 np0005548790.localdomain ceph-mon[301742]: pgmap v568: 177 pgs: 177 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 122 KiB/s wr, 71 op/s
Dec 06 10:25:18 np0005548790.localdomain ceph-mon[301742]: osdmap e254: 6 total, 6 up, 6 in
Dec 06 10:25:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "format": "json"}]: dispatch
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:19.005+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4e35c7b0-6333-486a-9deb-d9473aa05e04' of type subvolume
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4e35c7b0-6333-486a-9deb-d9473aa05e04' of type subvolume
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4e35c7b0-6333-486a-9deb-d9473aa05e04'' moved to trashcan
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4e35c7b0-6333-486a-9deb-d9473aa05e04, vol_name:cephfs) < ""
Dec 06 10:25:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.200 280869 DEBUG oslo_concurrency.lockutils [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.200 280869 DEBUG oslo_concurrency.lockutils [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.224 280869 INFO nova.compute.manager [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Detaching volume aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.272 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.285 280869 INFO nova.virt.block_device [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Attempting to driver detach volume aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965 from mountpoint /dev/vdb
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.297 280869 DEBUG nova.virt.libvirt.driver [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Attempting to detach device vdb from instance b59377c8-c3d7-452b-8305-d2853ef47bb4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.298 280869 DEBUG nova.virt.libvirt.guest [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <source protocol="rbd" name="volumes/volume-aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965">
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.103" port="6789"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.104" port="6789"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.105" port="6789"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   </source>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <target dev="vdb" bus="virtio"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <serial>aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965</serial>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: </disk>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.309 280869 INFO nova.virt.libvirt.driver [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Successfully detached device vdb from instance b59377c8-c3d7-452b-8305-d2853ef47bb4 from the persistent domain config.
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.310 280869 DEBUG nova.virt.libvirt.driver [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b59377c8-c3d7-452b-8305-d2853ef47bb4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.310 280869 DEBUG nova.virt.libvirt.guest [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] detach device xml: <disk type="network" device="disk">
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <source protocol="rbd" name="volumes/volume-aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965">
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.103" port="6789"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.104" port="6789"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:     <host name="172.18.0.105" port="6789"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   </source>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <target dev="vdb" bus="virtio"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <serial>aaa4ed2d-8dfc-40b7-87e3-cd5257ed1965</serial>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: </disk>
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 06 10:25:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 64 KiB/s wr, 56 op/s
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.433 280869 DEBUG nova.virt.libvirt.driver [None req-dbfccaf3-c0e5-451b-ad8c-274fc67d0b68 - - - - - -] Received event <DeviceRemovedEvent: 1765016719.4328184, b59377c8-c3d7-452b-8305-d2853ef47bb4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.435 280869 DEBUG nova.virt.libvirt.driver [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b59377c8-c3d7-452b-8305-d2853ef47bb4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.438 280869 INFO nova.virt.libvirt.driver [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Successfully detached device vdb from instance b59377c8-c3d7-452b-8305-d2853ef47bb4 from the live domain config.
Dec 06 10:25:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:25:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:25:19 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:25:19 np0005548790.localdomain podman[322177]: 2025-12-06 10:25:19.564081409 +0000 UTC m=+0.078614509 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.569 280869 DEBUG nova.objects.instance [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lazy-loading 'flavor' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:25:19 np0005548790.localdomain podman[322177]: 2025-12-06 10:25:19.573568835 +0000 UTC m=+0.088101965 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:25:19 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:25:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:19.627 280869 DEBUG oslo_concurrency.lockutils [None req-835fb42a-8e95-420f-b7a2-182f1d88b1bb b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:19 np0005548790.localdomain podman[322179]: 2025-12-06 10:25:19.629572711 +0000 UTC m=+0.137745219 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Dec 06 10:25:19 np0005548790.localdomain podman[322179]: 2025-12-06 10:25:19.645250476 +0000 UTC m=+0.153423054 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:25:19 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:25:19 np0005548790.localdomain podman[322178]: 2025-12-06 10:25:19.716179855 +0000 UTC m=+0.227876599 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:25:19 np0005548790.localdomain podman[322178]: 2025-12-06 10:25:19.731408157 +0000 UTC m=+0.243104961 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:25:19 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:25:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "541781d2-de06-47af-8a8c-92b3f31186b7_1ac163ca-79ea-43b6-8028-d76d24ca4cd1", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "541781d2-de06-47af-8a8c-92b3f31186b7", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "format": "json"}]: dispatch
Dec 06 10:25:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4e35c7b0-6333-486a-9deb-d9473aa05e04", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:19 np0005548790.localdomain ceph-mon[301742]: pgmap v571: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 64 KiB/s wr, 56 op/s
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.414 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.415 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.416 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.417 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.417 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.419 280869 INFO nova.compute.manager [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Terminating instance
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.420 280869 DEBUG nova.compute.manager [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 06 10:25:20 np0005548790.localdomain kernel: device tapc8391efe-ea left promiscuous mode
Dec 06 10:25:20 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016720.4989] device (tapc8391efe-ea): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.538 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00263|binding|INFO|Releasing lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 from this chassis (sb_readonly=0)
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00264|binding|INFO|Setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 down in Southbound
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00265|binding|INFO|Removing iface tapc8391efe-ea ovn-installed in OVS
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.540 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.550 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.551 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:95:9c 10.100.0.10'], port_security=['fa:16:3e:ec:95:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55ffc629-08a5-404f-87a7-26deb97840dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b51f704fe6204487b0317c3332364cca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd407968b-b8de-45cd-a244-3bf62d3c0357', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a555e286-25fe-4028-bbdb-d66a3efae4d1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=c8391efe-eabf-46a0-94e6-c12eb660cfb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.553 159200 INFO neutron.agent.ovn.metadata.agent [-] Port c8391efe-eabf-46a0-94e6-c12eb660cfb2 in datapath 55ffc629-08a5-404f-87a7-26deb97840dc unbound from our chassis
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.555 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55ffc629-08a5-404f-87a7-26deb97840dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.557 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[65bc0a7e-bba6-4507-b2e1-9ecbf8717bbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.558 159200 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc namespace which is not needed anymore
Dec 06 10:25:20 np0005548790.localdomain systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 06 10:25:20 np0005548790.localdomain systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 16.886s CPU time.
Dec 06 10:25:20 np0005548790.localdomain systemd-machined[202564]: Machine qemu-4-instance-0000000a terminated.
Dec 06 10:25:20 np0005548790.localdomain kernel: device tapc8391efe-ea entered promiscuous mode
Dec 06 10:25:20 np0005548790.localdomain kernel: device tapc8391efe-ea left promiscuous mode
Dec 06 10:25:20 np0005548790.localdomain NetworkManager[5968]: <info>  [1765016720.6429] manager: (tapc8391efe-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00266|binding|INFO|Claiming lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 for this chassis.
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00267|binding|INFO|c8391efe-eabf-46a0-94e6-c12eb660cfb2: Claiming fa:16:3e:ec:95:9c 10.100.0.10
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.650 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.657 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:95:9c 10.100.0.10'], port_security=['fa:16:3e:ec:95:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55ffc629-08a5-404f-87a7-26deb97840dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b51f704fe6204487b0317c3332364cca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd407968b-b8de-45cd-a244-3bf62d3c0357', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a555e286-25fe-4028-bbdb-d66a3efae4d1, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=c8391efe-eabf-46a0-94e6-c12eb660cfb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.664 280869 INFO nova.virt.libvirt.driver [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Instance destroyed successfully.
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.665 280869 DEBUG nova.objects.instance [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lazy-loading 'resources' on Instance uuid b59377c8-c3d7-452b-8305-d2853ef47bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00268|binding|INFO|Setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 ovn-installed in OVS
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00269|binding|INFO|Setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 up in Southbound
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00270|binding|INFO|Releasing lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 from this chassis (sb_readonly=1)
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00271|if_status|INFO|Not setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 down as sb is readonly
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.670 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00272|binding|INFO|Removing iface tapc8391efe-ea ovn-installed in OVS
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00273|binding|INFO|Releasing lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 from this chassis (sb_readonly=0)
Dec 06 10:25:20 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:25:20Z|00274|binding|INFO|Setting lport c8391efe-eabf-46a0-94e6-c12eb660cfb2 down in Southbound
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.675 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.677 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.686 280869 DEBUG nova.virt.libvirt.vif [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-06T10:24:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-739598656',display_name='tempest-VolumesBackupsTest-instance-739598656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005548790.localdomain',hostname='tempest-volumesbackupstest-instance-739598656',id=10,image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI3Da3ZAzMu07Y3jOXfDkV45E3//rDjS7tw7vpkgEh1B1VKbPEiZiwURSVqrMcu/DW1QQdZYZpxlNs8HoKSRsiyrqyYFtsCyQHjgg2Q1M3OkTcWMp/2hJEhqLfueca6tfQ==',key_name='tempest-keypair-1339678623',keypairs=<?>,launch_index=0,launched_at=2025-12-06T10:24:23Z,launched_on='np0005548790.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005548790.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b51f704fe6204487b0317c3332364cca',ramdisk_id='',reservation_id='r-ljipvdtt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a944ab6-8965-4055-b7fc-af6e395005ea',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-1677778141',owner_user_name='tempest-VolumesBackupsTest-1677778141-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-06T10:24:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b40d497af0834616a664e6909c0f6685',uuid=b59377c8-c3d7-452b-8305-d2853ef47bb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.687 280869 DEBUG nova.network.os_vif_util [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Converting VIF {"id": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "address": "fa:16:3e:ec:95:9c", "network": {"id": "55ffc629-08a5-404f-87a7-26deb97840dc", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1845353867-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "b51f704fe6204487b0317c3332364cca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8391efe-ea", "ovs_interfaceid": "c8391efe-eabf-46a0-94e6-c12eb660cfb2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.687 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:95:9c 10.100.0.10'], port_security=['fa:16:3e:ec:95:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b59377c8-c3d7-452b-8305-d2853ef47bb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55ffc629-08a5-404f-87a7-26deb97840dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b51f704fe6204487b0317c3332364cca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd407968b-b8de-45cd-a244-3bf62d3c0357', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a555e286-25fe-4028-bbdb-d66a3efae4d1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=c8391efe-eabf-46a0-94e6-c12eb660cfb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.688 280869 DEBUG nova.network.os_vif_util [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.688 280869 DEBUG os_vif [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.691 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.692 280869 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8391efe-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.693 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.695 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.697 280869 INFO os_vif [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ec:95:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8391efe-eabf-46a0-94e6-c12eb660cfb2,network=Network(55ffc629-08a5-404f-87a7-26deb97840dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8391efe-ea')
Dec 06 10:25:20 np0005548790.localdomain systemd[1]: tmp-crun.FyHBhG.mount: Deactivated successfully.
Dec 06 10:25:20 np0005548790.localdomain neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc[321734]: [NOTICE]   (321738) : haproxy version is 2.8.14-c23fe91
Dec 06 10:25:20 np0005548790.localdomain neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc[321734]: [NOTICE]   (321738) : path to executable is /usr/sbin/haproxy
Dec 06 10:25:20 np0005548790.localdomain neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc[321734]: [ALERT]    (321738) : Current worker (321740) exited with code 143 (Terminated)
Dec 06 10:25:20 np0005548790.localdomain neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc[321734]: [WARNING]  (321738) : All workers exited. Exiting... (0)
Dec 06 10:25:20 np0005548790.localdomain systemd[1]: libpod-ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11.scope: Deactivated successfully.
Dec 06 10:25:20 np0005548790.localdomain podman[322268]: 2025-12-06 10:25:20.745858015 +0000 UTC m=+0.092535926 container died ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 06 10:25:20 np0005548790.localdomain podman[322268]: 2025-12-06 10:25:20.78483758 +0000 UTC m=+0.131515441 container cleanup ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 06 10:25:20 np0005548790.localdomain podman[322301]: 2025-12-06 10:25:20.84244008 +0000 UTC m=+0.082770052 container cleanup ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:25:20 np0005548790.localdomain systemd[1]: libpod-conmon-ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11.scope: Deactivated successfully.
Dec 06 10:25:20 np0005548790.localdomain podman[322320]: 2025-12-06 10:25:20.868555696 +0000 UTC m=+0.069727558 container remove ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.872 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[a66e20b0-0d13-43a1-9637-317a0c24eae8]: (4, ('Sat Dec  6 10:25:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc (ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11)\nab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11\nSat Dec  6 10:25:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc (ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11)\nab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.874 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[7e635316-903c-46b7-bc55-dd5608ee5afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.875 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55ffc629-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.877 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain kernel: device tap55ffc629-00 left promiscuous mode
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.879 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.882 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[8619fafa-67fa-4b9d-8e46-81719917eb51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e255 e255: 6 total, 6 up, 6 in
Dec 06 10:25:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:20.888 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.901 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[92e3711f-2f53-49a7-a60e-98b2d96a0f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.902 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[9562271a-676b-4f6b-84b0-55ed5dcde3a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.919 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[83361b52-0efe-4904-a91c-010c1efdcc41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1307757, 'reachable_time': 41105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322339, 'error': None, 'target': 'ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.921 159379 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55ffc629-08a5-404f-87a7-26deb97840dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.921 159379 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7fbac4-0e31-4eda-a7da-28c89b801450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.922 159200 INFO neutron.agent.ovn.metadata.agent [-] Port c8391efe-eabf-46a0-94e6-c12eb660cfb2 in datapath 55ffc629-08a5-404f-87a7-26deb97840dc unbound from our chassis
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.924 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55ffc629-08a5-404f-87a7-26deb97840dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.925 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[7508252f-abb1-4706-903e-241e6c6979db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.926 159200 INFO neutron.agent.ovn.metadata.agent [-] Port c8391efe-eabf-46a0-94e6-c12eb660cfb2 in datapath 55ffc629-08a5-404f-87a7-26deb97840dc unbound from our chassis
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.928 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55ffc629-08a5-404f-87a7-26deb97840dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:25:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:20.929 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[33bc1277-75b1-4ebe-bddf-7c6ac018225d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.301 280869 INFO nova.virt.libvirt.driver [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Deleting instance files /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4_del
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.302 280869 INFO nova.virt.libvirt.driver [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Deletion of /var/lib/nova/instances/b59377c8-c3d7-452b-8305-d2853ef47bb4_del complete
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 70 KiB/s wr, 61 op/s
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.358 280869 DEBUG nova.compute.manager [req-0d85f1dd-9314-4c4c-8e1c-02c95cde9df4 req-1d37c918-4117-4f17-b36d-5503066f9d49 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-unplugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.358 280869 DEBUG oslo_concurrency.lockutils [req-0d85f1dd-9314-4c4c-8e1c-02c95cde9df4 req-1d37c918-4117-4f17-b36d-5503066f9d49 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.359 280869 DEBUG oslo_concurrency.lockutils [req-0d85f1dd-9314-4c4c-8e1c-02c95cde9df4 req-1d37c918-4117-4f17-b36d-5503066f9d49 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.359 280869 DEBUG oslo_concurrency.lockutils [req-0d85f1dd-9314-4c4c-8e1c-02c95cde9df4 req-1d37c918-4117-4f17-b36d-5503066f9d49 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.360 280869 DEBUG nova.compute.manager [req-0d85f1dd-9314-4c4c-8e1c-02c95cde9df4 req-1d37c918-4117-4f17-b36d-5503066f9d49 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-unplugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.360 280869 DEBUG nova.compute.manager [req-0d85f1dd-9314-4c4c-8e1c-02c95cde9df4 req-1d37c918-4117-4f17-b36d-5503066f9d49 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-unplugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.397 280869 INFO nova.compute.manager [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Took 0.98 seconds to destroy the instance on the hypervisor.
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.397 280869 DEBUG oslo.service.loopingcall [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.398 280869 DEBUG nova.compute.manager [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.398 280869 DEBUG nova.network.neutron [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 06 10:25:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-b7628a71dceefaa970745c21c0b1861909eba1ecb7069446b647b01f7cac1931-merged.mount: Deactivated successfully.
Dec 06 10:25:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab93c3c3db34e7ec3c84647251b8a6ec0a0d638bea3351300d3a2f9a9b01ab11-userdata-shm.mount: Deactivated successfully.
Dec 06 10:25:21 np0005548790.localdomain systemd[1]: run-netns-ovnmeta\x2d55ffc629\x2d08a5\x2d404f\x2d87a7\x2d26deb97840dc.mount: Deactivated successfully.
Dec 06 10:25:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:21.835 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:21.847 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:25:21 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:21.848 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:25:21 np0005548790.localdomain ceph-mon[301742]: osdmap e255: 6 total, 6 up, 6 in
Dec 06 10:25:21 np0005548790.localdomain ceph-mon[301742]: pgmap v573: 177 pgs: 2 active+clean+snaptrim, 175 active+clean; 282 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 70 KiB/s wr, 61 op/s
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "bd3fee73-c2d0-4d8d-bc68-793e93881208_9652976b-d5e6-4b47-ae83-6f26b1212f0e", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd3fee73-c2d0-4d8d-bc68-793e93881208_9652976b-d5e6-4b47-ae83-6f26b1212f0e, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd3fee73-c2d0-4d8d-bc68-793e93881208_9652976b-d5e6-4b47-ae83-6f26b1212f0e, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "bd3fee73-c2d0-4d8d-bc68-793e93881208", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd3fee73-c2d0-4d8d-bc68-793e93881208, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bd3fee73-c2d0-4d8d-bc68-793e93881208, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "format": "json"}]: dispatch
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:22.267+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '82abd4b2-157a-49c5-b0f6-995ee895ebc0' of type subvolume
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '82abd4b2-157a-49c5-b0f6-995ee895ebc0' of type subvolume
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/82abd4b2-157a-49c5-b0f6-995ee895ebc0'' moved to trashcan
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:82abd4b2-157a-49c5-b0f6-995ee895ebc0, vol_name:cephfs) < ""
Dec 06 10:25:22 np0005548790.localdomain neutron_sriov_agent[255311]: 2025-12-06 10:25:22.658 2 INFO neutron.agent.securitygroups_rpc [req-63e33143-79ba-4452-b217-6b4868995963 req-6d925882-c432-4b30-bcfa-4ea2e9401f50 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Security group member updated ['d407968b-b8de-45cd-a244-3bf62d3c0357']
Dec 06 10:25:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e256 e256: 6 total, 6 up, 6 in
Dec 06 10:25:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:22.809 280869 DEBUG nova.network.neutron [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 06 10:25:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:22.827 280869 INFO nova.compute.manager [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Took 1.43 seconds to deallocate network for instance.
Dec 06 10:25:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:22.863 280869 DEBUG nova.compute.manager [req-4a40d8b2-c933-4560-954b-47b170a70fba req-b427532d-76f2-4546-bfb5-375b274035ae 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-deleted-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:22.869 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:22.870 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:22.927 280869 DEBUG oslo_concurrency.processutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:25:23 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 06 10:25:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 169 KiB/s wr, 130 op/s
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3694335426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.410 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.411 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.412 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.412 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.412 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.413 280869 WARNING nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received unexpected event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with vm_state deleted and task_state None.
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.413 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.414 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.414 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.414 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.415 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.415 280869 WARNING nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received unexpected event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with vm_state deleted and task_state None.
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.416 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.416 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.417 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.417 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.417 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.418 280869 WARNING nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received unexpected event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with vm_state deleted and task_state None.
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.418 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-unplugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.419 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.419 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.419 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.420 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-unplugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.420 280869 WARNING nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received unexpected event network-vif-unplugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with vm_state deleted and task_state None.
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.421 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.421 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Acquiring lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.421 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.422 280869 DEBUG oslo_concurrency.lockutils [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.422 280869 DEBUG nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] No waiting events found dispatching network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.423 280869 WARNING nova.compute.manager [req-250b48a6-9446-42ef-b355-d35d256ef2d9 req-c5623f2a-e460-478d-9cb6-4ea4349c8a7d 0b60e97090454089ad6b8f4b898d0040 660705fe1cbe4111b6f2c99ce8d05c8c - - default default] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Received unexpected event network-vif-plugged-c8391efe-eabf-46a0-94e6-c12eb660cfb2 for instance with vm_state deleted and task_state None.
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.424 280869 DEBUG oslo_concurrency.processutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.430 280869 DEBUG nova.compute.provider_tree [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.454 280869 DEBUG nova.scheduler.client.report [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.475 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.524 280869 INFO nova.scheduler.client.report [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Deleted allocations for instance b59377c8-c3d7-452b-8305-d2853ef47bb4
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:25:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:25:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:23.614 280869 DEBUG oslo_concurrency.lockutils [None req-63e33143-79ba-4452-b217-6b4868995963 b40d497af0834616a664e6909c0f6685 b51f704fe6204487b0317c3332364cca - - default default] Lock "b59377c8-c3d7-452b-8305-d2853ef47bb4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "bd3fee73-c2d0-4d8d-bc68-793e93881208_9652976b-d5e6-4b47-ae83-6f26b1212f0e", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "bd3fee73-c2d0-4d8d-bc68-793e93881208", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "format": "json"}]: dispatch
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "82abd4b2-157a-49c5-b0f6-995ee895ebc0", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: osdmap e256: 6 total, 6 up, 6 in
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3694335426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:23 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e257 e257: 6 total, 6 up, 6 in
Dec 06 10:25:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:24.307 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:24 np0005548790.localdomain ceph-mon[301742]: pgmap v575: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 169 KiB/s wr, 130 op/s
Dec 06 10:25:24 np0005548790.localdomain ceph-mon[301742]: osdmap e257: 6 total, 6 up, 6 in
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "9ca8d539-a020-426e-a70b-3e1ee1b92584_844e7d16-6de7-405a-b7f6-e1408c2dd627", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9ca8d539-a020-426e-a70b-3e1ee1b92584_844e7d16-6de7-405a-b7f6-e1408c2dd627, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 95 KiB/s wr, 65 op/s
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9ca8d539-a020-426e-a70b-3e1ee1b92584_844e7d16-6de7-405a-b7f6-e1408c2dd627, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "9ca8d539-a020-426e-a70b-3e1ee1b92584", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9ca8d539-a020-426e-a70b-3e1ee1b92584, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9ca8d539-a020-426e-a70b-3e1ee1b92584, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:25.724 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e258 e258: 6 total, 6 up, 6 in
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "admin", "format": "json"}]: dispatch
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:25.756+0000 7f06345ec640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec 06 10:25:25 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:25.850 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "format": "json"}]: dispatch
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:25.877+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efbd58ea-e56f-4c21-9ed9-3d319ca403b8' of type subvolume
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'efbd58ea-e56f-4c21-9ed9-3d319ca403b8' of type subvolume
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/efbd58ea-e56f-4c21-9ed9-3d319ca403b8'' moved to trashcan
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:efbd58ea-e56f-4c21-9ed9-3d319ca403b8, vol_name:cephfs) < ""
Dec 06 10:25:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "9ca8d539-a020-426e-a70b-3e1ee1b92584_844e7d16-6de7-405a-b7f6-e1408c2dd627", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:26 np0005548790.localdomain ceph-mon[301742]: pgmap v577: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 95 KiB/s wr, 65 op/s
Dec 06 10:25:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "9ca8d539-a020-426e-a70b-3e1ee1b92584", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "auth_id": "admin", "format": "json"}]: dispatch
Dec 06 10:25:26 np0005548790.localdomain ceph-mon[301742]: osdmap e258: 6 total, 6 up, 6 in
Dec 06 10:25:26 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e259 e259: 6 total, 6 up, 6 in
Dec 06 10:25:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 123 KiB/s wr, 85 op/s
Dec 06 10:25:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:25:27 np0005548790.localdomain podman[322363]: 2025-12-06 10:25:27.562816117 +0000 UTC m=+0.074492838 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:25:27 np0005548790.localdomain podman[322363]: 2025-12-06 10:25:27.574080471 +0000 UTC m=+0.085757202 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:25:27 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:25:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e260 e260: 6 total, 6 up, 6 in
Dec 06 10:25:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "format": "json"}]: dispatch
Dec 06 10:25:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "efbd58ea-e56f-4c21-9ed9-3d319ca403b8", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:27 np0005548790.localdomain ceph-mon[301742]: osdmap e259: 6 total, 6 up, 6 in
Dec 06 10:25:27 np0005548790.localdomain ceph-mon[301742]: osdmap e260: 6 total, 6 up, 6 in
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2541831024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2541831024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab_60347acc-96b8-4ecf-9b3d-01ad73eeabab", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab_60347acc-96b8-4ecf-9b3d-01ad73eeabab, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab_60347acc-96b8-4ecf-9b3d-01ad73eeabab, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp'
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta.tmp' to config b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf/.meta'
Dec 06 10:25:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: pgmap v580: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 123 KiB/s wr, 85 op/s
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2541831024' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:25:28 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/2541831024' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:25:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 76 KiB/s wr, 61 op/s
Dec 06 10:25:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:29.352 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab_60347acc-96b8-4ecf-9b3d-01ad73eeabab", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "snap_name": "cdf82d1a-cb71-4bc8-a483-dd9f0663d1ab", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:25:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:25:30 np0005548790.localdomain systemd[1]: tmp-crun.FNLSPa.mount: Deactivated successfully.
Dec 06 10:25:30 np0005548790.localdomain podman[322380]: 2025-12-06 10:25:30.562176679 +0000 UTC m=+0.074044235 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:25:30 np0005548790.localdomain podman[322381]: 2025-12-06 10:25:30.589067877 +0000 UTC m=+0.093125022 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:25:30 np0005548790.localdomain podman[322381]: 2025-12-06 10:25:30.625907954 +0000 UTC m=+0.129965169 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 06 10:25:30 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:25:30 np0005548790.localdomain podman[322380]: 2025-12-06 10:25:30.643994994 +0000 UTC m=+0.155862570 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:25:30 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:25:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:30.727 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:30 np0005548790.localdomain ceph-mon[301742]: pgmap v582: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 76 KiB/s wr, 61 op/s
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 71 KiB/s wr, 57 op/s
Dec 06 10:25:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:31.496 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "format": "json"}]: dispatch
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0908fc08-ca54-45dc-a60b-9cb6f31660bf' of type subvolume
Dec 06 10:25:31 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:31.795+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0908fc08-ca54-45dc-a60b-9cb6f31660bf' of type subvolume
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0908fc08-ca54-45dc-a60b-9cb6f31660bf'' moved to trashcan
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0908fc08-ca54-45dc-a60b-9cb6f31660bf, vol_name:cephfs) < ""
Dec 06 10:25:32 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e261 e261: 6 total, 6 up, 6 in
Dec 06 10:25:32 np0005548790.localdomain ceph-mon[301742]: pgmap v583: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 71 KiB/s wr, 57 op/s
Dec 06 10:25:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "format": "json"}]: dispatch
Dec 06 10:25:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0908fc08-ca54-45dc-a60b-9cb6f31660bf", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:32 np0005548790.localdomain ceph-mon[301742]: osdmap e261: 6 total, 6 up, 6 in
Dec 06 10:25:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 123 KiB/s wr, 60 op/s
Dec 06 10:25:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e262 e262: 6 total, 6 up, 6 in
Dec 06 10:25:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:34.383 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:34 np0005548790.localdomain ceph-mon[301742]: pgmap v585: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 123 KiB/s wr, 60 op/s
Dec 06 10:25:34 np0005548790.localdomain ceph-mon[301742]: osdmap e262: 6 total, 6 up, 6 in
Dec 06 10:25:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v587: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 106 KiB/s wr, 52 op/s
Dec 06 10:25:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, vol_name:cephfs) < ""
Dec 06 10:25:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, vol_name:cephfs) < ""
Dec 06 10:25:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:35.663 280869 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765016720.661992, b59377c8-c3d7-452b-8305-d2853ef47bb4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 06 10:25:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:35.664 280869 INFO nova.compute.manager [-] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] VM Stopped (Lifecycle Event)
Dec 06 10:25:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:35.681 280869 DEBUG nova.compute.manager [None req-44d8cc41-1e90-45e9-89a8-e5e7fde25712 - - - - - -] [instance: b59377c8-c3d7-452b-8305-d2853ef47bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 06 10:25:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:35.775 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b90e39e0-9458-4ad1-b3e6-84f20975b6e9/.meta.tmp'
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b90e39e0-9458-4ad1-b3e6-84f20975b6e9/.meta.tmp' to config b'/volumes/_nogroup/b90e39e0-9458-4ad1-b3e6-84f20975b6e9/.meta'
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "format": "json"}]: dispatch
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:36 np0005548790.localdomain ceph-mon[301742]: pgmap v587: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 106 KiB/s wr, 52 op/s
Dec 06 10:25:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:25:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e263 e263: 6 total, 6 up, 6 in
Dec 06 10:25:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 63 KiB/s wr, 9 op/s
Dec 06 10:25:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e264 e264: 6 total, 6 up, 6 in
Dec 06 10:25:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "format": "json"}]: dispatch
Dec 06 10:25:37 np0005548790.localdomain ceph-mon[301742]: osdmap e263: 6 total, 6 up, 6 in
Dec 06 10:25:38 np0005548790.localdomain ceph-mon[301742]: pgmap v589: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 63 KiB/s wr, 9 op/s
Dec 06 10:25:38 np0005548790.localdomain ceph-mon[301742]: osdmap e264: 6 total, 6 up, 6 in
Dec 06 10:25:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e265 e265: 6 total, 6 up, 6 in
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3052221500' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3052221500' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 51 KiB/s wr, 36 op/s
Dec 06 10:25:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:39.409 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 06 10:25:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:39 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: osdmap e265: 6 total, 6 up, 6 in
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3052221500' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:25:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3052221500' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:25:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:40.779 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e266 e266: 6 total, 6 up, 6 in
Dec 06 10:25:40 np0005548790.localdomain ceph-mon[301742]: pgmap v592: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 51 KiB/s wr, 36 op/s
Dec 06 10:25:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 06 10:25:40 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/302515343' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 63 KiB/s wr, 45 op/s
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:25:41
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['vms', 'manila_metadata', 'images', '.mgr', 'manila_data', 'volumes', 'backups']
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:25:41 np0005548790.localdomain ceph-mon[301742]: osdmap e266: 6 total, 6 up, 6 in
Dec 06 10:25:41 np0005548790.localdomain ceph-mon[301742]: pgmap v594: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 63 KiB/s wr, 45 op/s
Dec 06 10:25:41 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2812440448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e267 e267: 6 total, 6 up, 6 in
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014871994521217196 of space, bias 1.0, pg target 0.2969441572736367 quantized to 32 (current 32)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0008874123220268007 of space, bias 4.0, pg target 0.7063802083333334 quantized to 16 (current 16)
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:25:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:25:42 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e268 e268: 6 total, 6 up, 6 in
Dec 06 10:25:42 np0005548790.localdomain ceph-mon[301742]: osdmap e267: 6 total, 6 up, 6 in
Dec 06 10:25:42 np0005548790.localdomain ceph-mon[301742]: osdmap e268: 6 total, 6 up, 6 in
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "format": "json"}]: dispatch
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b90e39e0-9458-4ad1-b3e6-84f20975b6e9' of type subvolume
Dec 06 10:25:43 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:43.031+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b90e39e0-9458-4ad1-b3e6-84f20975b6e9' of type subvolume
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b90e39e0-9458-4ad1-b3e6-84f20975b6e9'' moved to trashcan
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b90e39e0-9458-4ad1-b3e6-84f20975b6e9, vol_name:cephfs) < ""
Dec 06 10:25:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 57 KiB/s wr, 105 op/s
Dec 06 10:25:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e269 e269: 6 total, 6 up, 6 in
Dec 06 10:25:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:44.446 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "format": "json"}]: dispatch
Dec 06 10:25:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b90e39e0-9458-4ad1-b3e6-84f20975b6e9", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:44 np0005548790.localdomain ceph-mon[301742]: pgmap v597: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 76 KiB/s rd, 57 KiB/s wr, 105 op/s
Dec 06 10:25:44 np0005548790.localdomain ceph-mon[301742]: osdmap e269: 6 total, 6 up, 6 in
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.211 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.212 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.212 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.235 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.236 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, vol_name:cephfs) < ""
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/82dd5c85-b26d-4a99-a866-de66484ffe5c/.meta.tmp'
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/82dd5c85-b26d-4a99-a866-de66484ffe5c/.meta.tmp' to config b'/volumes/_nogroup/82dd5c85-b26d-4a99-a866-de66484ffe5c/.meta'
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, vol_name:cephfs) < ""
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "format": "json"}]: dispatch
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, vol_name:cephfs) < ""
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, vol_name:cephfs) < ""
Dec 06 10:25:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 58 KiB/s wr, 106 op/s
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1501997510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:25:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1501997510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:25:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:45.782 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:46 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "format": "json"}]: dispatch
Dec 06 10:25:46 np0005548790.localdomain ceph-mon[301742]: pgmap v599: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 58 KiB/s wr, 106 op/s
Dec 06 10:25:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 43 KiB/s wr, 78 op/s
Dec 06 10:25:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:47.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:47.331 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:47.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:47 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:25:47 np0005548790.localdomain systemd[1]: tmp-crun.oW5Czb.mount: Deactivated successfully.
Dec 06 10:25:47 np0005548790.localdomain podman[322428]: 2025-12-06 10:25:47.570266038 +0000 UTC m=+0.088968289 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 06 10:25:47 np0005548790.localdomain podman[322428]: 2025-12-06 10:25:47.578377978 +0000 UTC m=+0.097080239 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 10:25:47 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:25:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e270 e270: 6 total, 6 up, 6 in
Dec 06 10:25:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:25:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:25:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:48.407 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:48.407 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:25:48.407 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:25:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:25:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:25:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18769 "" "Go-http-client/1.1"
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a6780a44-878b-4ff8-b9d6-a0cce9b3cee0/.meta.tmp'
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a6780a44-878b-4ff8-b9d6-a0cce9b3cee0/.meta.tmp' to config b'/volumes/_nogroup/a6780a44-878b-4ff8-b9d6-a0cce9b3cee0/.meta'
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "format": "json"}]: dispatch
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mon[301742]: pgmap v600: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 43 KiB/s wr, 78 op/s
Dec 06 10:25:48 np0005548790.localdomain ceph-mon[301742]: osdmap e270: 6 total, 6 up, 6 in
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3f0a96b5-7080-41d0-b8de-4c6a54c262e3/.meta.tmp'
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3f0a96b5-7080-41d0-b8de-4c6a54c262e3/.meta.tmp' to config b'/volumes/_nogroup/3f0a96b5-7080-41d0-b8de-4c6a54c262e3/.meta'
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "format": "json"}]: dispatch
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, vol_name:cephfs) < ""
Dec 06 10:25:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, vol_name:cephfs) < ""
Dec 06 10:25:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v602: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 81 KiB/s wr, 101 op/s
Dec 06 10:25:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:49.490 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "format": "json"}]: dispatch
Dec 06 10:25:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:50.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:25:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:25:50 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:25:50 np0005548790.localdomain podman[322446]: 2025-12-06 10:25:50.570616816 +0000 UTC m=+0.082513895 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:25:50 np0005548790.localdomain podman[322446]: 2025-12-06 10:25:50.579478276 +0000 UTC m=+0.091375405 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:25:50 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:25:50 np0005548790.localdomain podman[322448]: 2025-12-06 10:25:50.631893005 +0000 UTC m=+0.136606409 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec 06 10:25:50 np0005548790.localdomain podman[322447]: 2025-12-06 10:25:50.68602892 +0000 UTC m=+0.194603348 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:25:50 np0005548790.localdomain podman[322448]: 2025-12-06 10:25:50.698379124 +0000 UTC m=+0.203092528 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, version=9.6, architecture=x86_64, managed_by=edpm_ansible)
Dec 06 10:25:50 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:25:50 np0005548790.localdomain podman[322447]: 2025-12-06 10:25:50.751302496 +0000 UTC m=+0.259876884 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:25:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "format": "json"}]: dispatch
Dec 06 10:25:50 np0005548790.localdomain ceph-mon[301742]: pgmap v602: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 81 KiB/s wr, 101 op/s
Dec 06 10:25:50 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:25:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:50.785 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 35 KiB/s wr, 24 op/s
Dec 06 10:25:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 06 10:25:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "format": "json"}]: dispatch
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '82dd5c85-b26d-4a99-a866-de66484ffe5c' of type subvolume
Dec 06 10:25:52 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:52.143+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '82dd5c85-b26d-4a99-a866-de66484ffe5c' of type subvolume
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, vol_name:cephfs) < ""
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/82dd5c85-b26d-4a99-a866-de66484ffe5c'' moved to trashcan
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:52 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:82dd5c85-b26d-4a99-a866-de66484ffe5c, vol_name:cephfs) < ""
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.357 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.357 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.357 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:25:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e271 e271: 6 total, 6 up, 6 in
Dec 06 10:25:52 np0005548790.localdomain ceph-mon[301742]: pgmap v603: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 35 KiB/s wr, 24 op/s
Dec 06 10:25:52 np0005548790.localdomain ceph-mon[301742]: osdmap e271: 6 total, 6 up, 6 in
Dec 06 10:25:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:25:52 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4186351195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:52.827 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.004 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.006 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11459MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.006 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.007 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.074 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.074 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.091 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:25:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 94 KiB/s wr, 27 op/s
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3506710697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.551 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.559 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.588 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:25:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:25:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.614 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:25:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:53.614 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "format": "json"}]: dispatch
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "82dd5c85-b26d-4a99-a866-de66484ffe5c", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4186351195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2061225661' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:53 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3506710697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:54.539 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:54 np0005548790.localdomain ceph-mon[301742]: pgmap v605: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 94 KiB/s wr, 27 op/s
Dec 06 10:25:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3655531887' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "format": "json"}]: dispatch
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a6780a44-878b-4ff8-b9d6-a0cce9b3cee0' of type subvolume
Dec 06 10:25:55 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:55.179+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a6780a44-878b-4ff8-b9d6-a0cce9b3cee0' of type subvolume
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a6780a44-878b-4ff8-b9d6-a0cce9b3cee0'' moved to trashcan
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a6780a44-878b-4ff8-b9d6-a0cce9b3cee0, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 94 KiB/s wr, 27 op/s
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "format": "json"}]: dispatch
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:25:55.373+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3f0a96b5-7080-41d0-b8de-4c6a54c262e3' of type subvolume
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3f0a96b5-7080-41d0-b8de-4c6a54c262e3' of type subvolume
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3f0a96b5-7080-41d0-b8de-4c6a54c262e3'' moved to trashcan
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:25:55 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3f0a96b5-7080-41d0-b8de-4c6a54c262e3, vol_name:cephfs) < ""
Dec 06 10:25:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:55.788 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "format": "json"}]: dispatch
Dec 06 10:25:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a6780a44-878b-4ff8-b9d6-a0cce9b3cee0", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:56 np0005548790.localdomain ceph-mon[301742]: pgmap v606: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 94 KiB/s wr, 27 op/s
Dec 06 10:25:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "format": "json"}]: dispatch
Dec 06 10:25:56 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3f0a96b5-7080-41d0-b8de-4c6a54c262e3", "force": true, "format": "json"}]: dispatch
Dec 06 10:25:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 78 KiB/s wr, 22 op/s
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a6b976c3-30be-4a3d-bf97-d12650362891, vol_name:cephfs) < ""
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a6b976c3-30be-4a3d-bf97-d12650362891/.meta.tmp'
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a6b976c3-30be-4a3d-bf97-d12650362891/.meta.tmp' to config b'/volumes/_nogroup/a6b976c3-30be-4a3d-bf97-d12650362891/.meta'
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a6b976c3-30be-4a3d-bf97-d12650362891, vol_name:cephfs) < ""
Dec 06 10:25:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "format": "json"}]: dispatch
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a6b976c3-30be-4a3d-bf97-d12650362891, vol_name:cephfs) < ""
Dec 06 10:25:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a6b976c3-30be-4a3d-bf97-d12650362891, vol_name:cephfs) < ""
Dec 06 10:25:58 np0005548790.localdomain podman[322551]: 2025-12-06 10:25:58.572218941 +0000 UTC m=+0.083388908 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:25:58 np0005548790.localdomain podman[322551]: 2025-12-06 10:25:58.587646949 +0000 UTC m=+0.098816946 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:25:58 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:25:58 np0005548790.localdomain ceph-mon[301742]: pgmap v607: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 78 KiB/s wr, 22 op/s
Dec 06 10:25:58 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:25:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:25:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 87 KiB/s wr, 4 op/s
Dec 06 10:25:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:25:59.570 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:25:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:25:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "format": "json"}]: dispatch
Dec 06 10:26:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:00.813 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:00 np0005548790.localdomain ceph-mon[301742]: pgmap v608: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 87 KiB/s wr, 4 op/s
Dec 06 10:26:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v609: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 87 KiB/s wr, 4 op/s
Dec 06 10:26:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:26:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:26:01 np0005548790.localdomain podman[322570]: 2025-12-06 10:26:01.564749619 +0000 UTC m=+0.078177537 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:26:01 np0005548790.localdomain podman[322570]: 2025-12-06 10:26:01.577208986 +0000 UTC m=+0.090636894 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:26:01 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:26:01 np0005548790.localdomain podman[322571]: 2025-12-06 10:26:01.631198757 +0000 UTC m=+0.139393434 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:26:01 np0005548790.localdomain podman[322571]: 2025-12-06 10:26:01.672340061 +0000 UTC m=+0.180534778 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:26:01 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:26:01 np0005548790.localdomain ceph-mon[301742]: pgmap v609: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 87 KiB/s wr, 4 op/s
Dec 06 10:26:02 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:26:02Z|00275|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "format": "json"}]: dispatch
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a6b976c3-30be-4a3d-bf97-d12650362891, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a6b976c3-30be-4a3d-bf97-d12650362891, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:02 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:02.424+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a6b976c3-30be-4a3d-bf97-d12650362891' of type subvolume
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a6b976c3-30be-4a3d-bf97-d12650362891' of type subvolume
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a6b976c3-30be-4a3d-bf97-d12650362891, vol_name:cephfs) < ""
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a6b976c3-30be-4a3d-bf97-d12650362891'' moved to trashcan
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a6b976c3-30be-4a3d-bf97-d12650362891, vol_name:cephfs) < ""
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 386 B/s rd, 117 KiB/s wr, 6 op/s
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f7d12c1-956a-4451-b950-3662d25c7591, vol_name:cephfs) < ""
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7f7d12c1-956a-4451-b950-3662d25c7591/.meta.tmp'
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7f7d12c1-956a-4451-b950-3662d25c7591/.meta.tmp' to config b'/volumes/_nogroup/7f7d12c1-956a-4451-b950-3662d25c7591/.meta'
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7f7d12c1-956a-4451-b950-3662d25c7591, vol_name:cephfs) < ""
Dec 06 10:26:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "format": "json"}]: dispatch
Dec 06 10:26:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a6b976c3-30be-4a3d-bf97-d12650362891", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "format": "json"}]: dispatch
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f7d12c1-956a-4451-b950-3662d25c7591, vol_name:cephfs) < ""
Dec 06 10:26:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7f7d12c1-956a-4451-b950-3662d25c7591, vol_name:cephfs) < ""
Dec 06 10:26:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:04.569 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:04 np0005548790.localdomain ceph-mon[301742]: pgmap v610: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 386 B/s rd, 117 KiB/s wr, 6 op/s
Dec 06 10:26:04 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:04 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "format": "json"}]: dispatch
Dec 06 10:26:04 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v611: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 64 KiB/s wr, 3 op/s
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, vol_name:cephfs) < ""
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/691a6613-80b6-44f7-899f-c0e9f04c8f64'' moved to trashcan
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:691a6613-80b6-44f7-899f-c0e9f04c8f64, vol_name:cephfs) < ""
Dec 06 10:26:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:05.844 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:06 np0005548790.localdomain ceph-mon[301742]: pgmap v611: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 64 KiB/s wr, 3 op/s
Dec 06 10:26:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "format": "json"}]: dispatch
Dec 06 10:26:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "691a6613-80b6-44f7-899f-c0e9f04c8f64", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 64 KiB/s wr, 3 op/s
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, vol_name:cephfs) < ""
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/84cf61e2-4e49-410b-a14b-54abbb5a4b95/.meta.tmp'
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/84cf61e2-4e49-410b-a14b-54abbb5a4b95/.meta.tmp' to config b'/volumes/_nogroup/84cf61e2-4e49-410b-a14b-54abbb5a4b95/.meta'
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, vol_name:cephfs) < ""
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "format": "json"}]: dispatch
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, vol_name:cephfs) < ""
Dec 06 10:26:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, vol_name:cephfs) < ""
Dec 06 10:26:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404_42a3d50e-9358-4b7a-9bc8-ccb63c964302", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404_42a3d50e-9358-4b7a-9bc8-ccb63c964302, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:26:08 np0005548790.localdomain ceph-mon[301742]: pgmap v612: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 64 KiB/s wr, 3 op/s
Dec 06 10:26:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "format": "json"}]: dispatch
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp'
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp' to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta'
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404_42a3d50e-9358-4b7a-9bc8-ccb63c964302, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp'
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta.tmp' to config b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f/.meta'
Dec 06 10:26:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5e12b35-c54a-4c08-b88c-95eb83ef6404, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:26:09 np0005548790.localdomain sudo[322618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:26:09 np0005548790.localdomain sudo[322618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:26:09 np0005548790.localdomain sudo[322618]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:09 np0005548790.localdomain sudo[322636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:26:09 np0005548790.localdomain sudo[322636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:26:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 103 KiB/s wr, 6 op/s
Dec 06 10:26:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:09.604 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404_42a3d50e-9358-4b7a-9bc8-ccb63c964302", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "snap_name": "f5e12b35-c54a-4c08-b88c-95eb83ef6404", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:09 np0005548790.localdomain sudo[322636]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:26:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:26:10 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 91315f18-59d4-4cbb-8932-cc4b9a560937 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:26:10 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 91315f18-59d4-4cbb-8932-cc4b9a560937 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:26:10 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 91315f18-59d4-4cbb-8932-cc4b9a560937 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:26:10 np0005548790.localdomain sudo[322687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:26:10 np0005548790.localdomain sudo[322687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:26:10 np0005548790.localdomain sudo[322687]: pam_unix(sudo:session): session closed for user root
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: pgmap v613: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 103 KiB/s wr, 6 op/s
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:26:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:26:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:10.847 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, vol_name:cephfs) < ""
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/87f760d1-7d00-4cc5-99b2-a7d946785f7e/.meta.tmp'
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/87f760d1-7d00-4cc5-99b2-a7d946785f7e/.meta.tmp' to config b'/volumes/_nogroup/87f760d1-7d00-4cc5-99b2-a7d946785f7e/.meta'
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, vol_name:cephfs) < ""
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "format": "json"}]: dispatch
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, vol_name:cephfs) < ""
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, vol_name:cephfs) < ""
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 70 KiB/s wr, 4 op/s
Dec 06 10:26:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "format": "json"}]: dispatch
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:56f0c12e-9bce-473e-90b3-283dbd57851f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:56f0c12e-9bce-473e-90b3-283dbd57851f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:12 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:12.106+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56f0c12e-9bce-473e-90b3-283dbd57851f' of type subvolume
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56f0c12e-9bce-473e-90b3-283dbd57851f' of type subvolume
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/56f0c12e-9bce-473e-90b3-283dbd57851f'' moved to trashcan
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56f0c12e-9bce-473e-90b3-283dbd57851f, vol_name:cephfs) < ""
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0627d4d7c0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0618c16ee0>)]
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:26:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "format": "json"}]: dispatch
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: pgmap v614: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 70 KiB/s wr, 4 op/s
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "format": "json"}]: dispatch
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56f0c12e-9bce-473e-90b3-283dbd57851f", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:12 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:26:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 113 KiB/s wr, 7 op/s
Dec 06 10:26:13 np0005548790.localdomain ceph-mon[301742]: pgmap v615: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 113 KiB/s wr, 7 op/s
Dec 06 10:26:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:14.608 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:26:14 np0005548790.localdomain snmpd[67989]: empty variable list in _query
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab908986-633b-4dd3-a1cc-27834b2adc14, vol_name:cephfs) < ""
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ab908986-633b-4dd3-a1cc-27834b2adc14/.meta.tmp'
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ab908986-633b-4dd3-a1cc-27834b2adc14/.meta.tmp' to config b'/volumes/_nogroup/ab908986-633b-4dd3-a1cc-27834b2adc14/.meta'
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ab908986-633b-4dd3-a1cc-27834b2adc14, vol_name:cephfs) < ""
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "format": "json"}]: dispatch
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab908986-633b-4dd3-a1cc-27834b2adc14, vol_name:cephfs) < ""
Dec 06 10:26:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ab908986-633b-4dd3-a1cc-27834b2adc14, vol_name:cephfs) < ""
Dec 06 10:26:14 np0005548790.localdomain ceph-mon[301742]: mgrmap e54: np0005548790.kvkfyr(active, since 14m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:26:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 82 KiB/s wr, 5 op/s
Dec 06 10:26:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:15.850 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e272 e272: 6 total, 6 up, 6 in
Dec 06 10:26:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "format": "json"}]: dispatch
Dec 06 10:26:15 np0005548790.localdomain ceph-mon[301742]: pgmap v616: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 82 KiB/s wr, 5 op/s
Dec 06 10:26:16 np0005548790.localdomain ceph-mon[301742]: osdmap e272: 6 total, 6 up, 6 in
Dec 06 10:26:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 99 KiB/s wr, 6 op/s
Dec 06 10:26:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:26:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:26:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:26:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:26:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:26:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18775 "" "Go-http-client/1.1"
Dec 06 10:26:18 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:26:18 np0005548790.localdomain podman[322706]: 2025-12-06 10:26:18.575584503 +0000 UTC m=+0.081945629 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:26:18 np0005548790.localdomain podman[322706]: 2025-12-06 10:26:18.580766134 +0000 UTC m=+0.087127300 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 10:26:18 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:26:18 np0005548790.localdomain ceph-mon[301742]: pgmap v618: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 99 KiB/s wr, 6 op/s
Dec 06 10:26:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/.meta.tmp'
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/.meta.tmp' to config b'/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/.meta'
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "format": "json"}]: dispatch
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 78 KiB/s wr, 5 op/s
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "format": "json"}]: dispatch
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ab908986-633b-4dd3-a1cc-27834b2adc14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ab908986-633b-4dd3-a1cc-27834b2adc14, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:19.376+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab908986-633b-4dd3-a1cc-27834b2adc14' of type subvolume
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ab908986-633b-4dd3-a1cc-27834b2adc14' of type subvolume
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab908986-633b-4dd3-a1cc-27834b2adc14, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ab908986-633b-4dd3-a1cc-27834b2adc14'' moved to trashcan
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:19 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ab908986-633b-4dd3-a1cc-27834b2adc14, vol_name:cephfs) < ""
Dec 06 10:26:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:19.641 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "format": "json"}]: dispatch
Dec 06 10:26:20 np0005548790.localdomain ceph-mon[301742]: pgmap v619: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 78 KiB/s wr, 5 op/s
Dec 06 10:26:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "format": "json"}]: dispatch
Dec 06 10:26:20 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ab908986-633b-4dd3-a1cc-27834b2adc14", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:20.853 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 78 KiB/s wr, 5 op/s
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:26:21 np0005548790.localdomain podman[322725]: 2025-12-06 10:26:21.575761018 +0000 UTC m=+0.088544828 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:26:21 np0005548790.localdomain podman[322725]: 2025-12-06 10:26:21.587130985 +0000 UTC m=+0.099914805 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:26:21 np0005548790.localdomain podman[322726]: 2025-12-06 10:26:21.678501748 +0000 UTC m=+0.187754053 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 10:26:21 np0005548790.localdomain podman[322726]: 2025-12-06 10:26:21.695157489 +0000 UTC m=+0.204409824 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: tmp-crun.kzVO5q.mount: Deactivated successfully.
Dec 06 10:26:21 np0005548790.localdomain podman[322727]: 2025-12-06 10:26:21.780373856 +0000 UTC m=+0.286210188 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc.)
Dec 06 10:26:21 np0005548790.localdomain podman[322727]: 2025-12-06 10:26:21.794235181 +0000 UTC m=+0.300071523 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal)
Dec 06 10:26:21 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:26:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:22.246 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:26:22.248 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:26:22 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:26:22.249 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "format": "json"}]: dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:22.527+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87f760d1-7d00-4cc5-99b2-a7d946785f7e' of type subvolume
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '87f760d1-7d00-4cc5-99b2-a7d946785f7e' of type subvolume
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, vol_name:cephfs) < ""
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/87f760d1-7d00-4cc5-99b2-a7d946785f7e'' moved to trashcan
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:87f760d1-7d00-4cc5-99b2-a7d946785f7e, vol_name:cephfs) < ""
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve49", "tenant_id": "e0991b50d433489b9122b5c71fdb2883", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, tenant_id:e0991b50d433489b9122b5c71fdb2883, vol_name:cephfs) < ""
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID eve49 with tenant e0991b50d433489b9122b5c71fdb2883
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, tenant_id:e0991b50d433489b9122b5c71fdb2883, vol_name:cephfs) < ""
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 e273: 6 total, 6 up, 6 in
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: pgmap v620: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 78 KiB/s wr, 5 op/s
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:22 np0005548790.localdomain ceph-mon[301742]: osdmap e273: 6 total, 6 up, 6 in
Dec 06 10:26:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 130 KiB/s wr, 7 op/s
Dec 06 10:26:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:23.556 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:26:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:26:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:26:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:26:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:26:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "format": "json"}]: dispatch
Dec 06 10:26:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "87f760d1-7d00-4cc5-99b2-a7d946785f7e", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve49", "tenant_id": "e0991b50d433489b9122b5c71fdb2883", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:24.677 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:24 np0005548790.localdomain ceph-mon[301742]: pgmap v622: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 130 KiB/s wr, 7 op/s
Dec 06 10:26:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 435 B/s rd, 111 KiB/s wr, 6 op/s
Dec 06 10:26:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve48", "tenant_id": "e0991b50d433489b9122b5c71fdb2883", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, tenant_id:e0991b50d433489b9122b5c71fdb2883, vol_name:cephfs) < ""
Dec 06 10:26:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec 06 10:26:25 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 06 10:26:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID eve48 with tenant e0991b50d433489b9122b5c71fdb2883
Dec 06 10:26:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:25.857 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:25 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 06 10:26:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:25 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, tenant_id:e0991b50d433489b9122b5c71fdb2883, vol_name:cephfs) < ""
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "format": "json"}]: dispatch
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:26 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:26.007+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84cf61e2-4e49-410b-a14b-54abbb5a4b95' of type subvolume
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84cf61e2-4e49-410b-a14b-54abbb5a4b95' of type subvolume
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, vol_name:cephfs) < ""
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/84cf61e2-4e49-410b-a14b-54abbb5a4b95'' moved to trashcan
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:84cf61e2-4e49-410b-a14b-54abbb5a4b95, vol_name:cephfs) < ""
Dec 06 10:26:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:26.689 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: pgmap v623: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 435 B/s rd, 111 KiB/s wr, 6 op/s
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve48", "tenant_id": "e0991b50d433489b9122b5c71fdb2883", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "format": "json"}]: dispatch
Dec 06 10:26:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "84cf61e2-4e49-410b-a14b-54abbb5a4b95", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 104 KiB/s wr, 5 op/s
Dec 06 10:26:28 np0005548790.localdomain ceph-mon[301742]: pgmap v624: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 104 KiB/s wr, 5 op/s
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "format": "json"}]: dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7f7d12c1-956a-4451-b950-3662d25c7591, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7f7d12c1-956a-4451-b950-3662d25c7591, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:29.146+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f7d12c1-956a-4451-b950-3662d25c7591' of type subvolume
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7f7d12c1-956a-4451-b950-3662d25c7591' of type subvolume
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f7d12c1-956a-4451-b950-3662d25c7591, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7f7d12c1-956a-4451-b950-3662d25c7591'' moved to trashcan
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7f7d12c1-956a-4451-b950-3662d25c7591, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 126 KiB/s wr, 7 op/s
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1
Dec 06 10:26:29 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1],prefix=session evict} (starting...)
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:29 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:26:29 np0005548790.localdomain podman[322787]: 2025-12-06 10:26:29.568537394 +0000 UTC m=+0.078830095 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:26:29 np0005548790.localdomain podman[322787]: 2025-12-06 10:26:29.580014604 +0000 UTC m=+0.090307265 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:26:29 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:26:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:29.720 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 06 10:26:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec 06 10:26:30 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:26:30.251 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/.meta.tmp'
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/.meta.tmp' to config b'/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/.meta'
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "format": "json"}]: dispatch
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:30.440 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "format": "json"}]: dispatch
Dec 06 10:26:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f7d12c1-956a-4451-b950-3662d25c7591", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 06 10:26:30 np0005548790.localdomain ceph-mon[301742]: pgmap v625: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 126 KiB/s wr, 7 op/s
Dec 06 10:26:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 06 10:26:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:30.865 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 126 KiB/s wr, 7 op/s
Dec 06 10:26:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "format": "json"}]: dispatch
Dec 06 10:26:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:26:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:26:32 np0005548790.localdomain systemd[1]: tmp-crun.XJTRLJ.mount: Deactivated successfully.
Dec 06 10:26:32 np0005548790.localdomain podman[322806]: 2025-12-06 10:26:32.573498107 +0000 UTC m=+0.090115550 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:26:32 np0005548790.localdomain podman[322807]: 2025-12-06 10:26:32.619749359 +0000 UTC m=+0.133218397 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:26:32 np0005548790.localdomain podman[322806]: 2025-12-06 10:26:32.637434817 +0000 UTC m=+0.154052260 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:26:32 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:26:32 np0005548790.localdomain podman[322807]: 2025-12-06 10:26:32.684113751 +0000 UTC m=+0.197582769 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:26:32 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:26:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve47", "tenant_id": "e0991b50d433489b9122b5c71fdb2883", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, tenant_id:e0991b50d433489b9122b5c71fdb2883, vol_name:cephfs) < ""
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 06 10:26:32 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID eve47 with tenant e0991b50d433489b9122b5c71fdb2883
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: pgmap v626: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 126 KiB/s wr, 7 op/s
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, tenant_id:e0991b50d433489b9122b5c71fdb2883, vol_name:cephfs) < ""
Dec 06 10:26:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 870 B/s rd, 208 KiB/s wr, 13 op/s
Dec 06 10:26:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve47", "tenant_id": "e0991b50d433489b9122b5c71fdb2883", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_601ad3a1-8738-4a72-911d-38595abebd4b", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:33 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:33.913 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:34 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:34.746 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:34 np0005548790.localdomain ceph-mon[301742]: pgmap v627: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 870 B/s rd, 208 KiB/s wr, 13 op/s
Dec 06 10:26:34 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 119 KiB/s wr, 8 op/s
Dec 06 10:26:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:35.923 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:35 np0005548790.localdomain ceph-mon[301742]: pgmap v628: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 119 KiB/s wr, 8 op/s
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1
Dec 06 10:26:36 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1],prefix=session evict} (starting...)
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:36 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec 06 10:26:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 119 KiB/s wr, 8 op/s
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:26:37 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: pgmap v629: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 119 KiB/s wr, 8 op/s
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:26:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:38.447 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 183 KiB/s wr, 12 op/s
Dec 06 10:26:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3136332362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:26:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3136332362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:26:39 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:39.750 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: pgmap v630: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 183 KiB/s wr, 12 op/s
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1
Dec 06 10:26:40 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b/a5c5bd7f-cc74-4f03-8395-ee4b429e02e1],prefix=session evict} (starting...)
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta.tmp'
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta.tmp' to config b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta'
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "format": "json"}]: dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:40.962 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:601ad3a1-8738-4a72-911d-38595abebd4b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:601ad3a1-8738-4a72-911d-38595abebd4b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:41.032+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '601ad3a1-8738-4a72-911d-38595abebd4b' of type subvolume
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '601ad3a1-8738-4a72-911d-38595abebd4b' of type subvolume
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/601ad3a1-8738-4a72-911d-38595abebd4b'' moved to trashcan
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:601ad3a1-8738-4a72-911d-38595abebd4b, vol_name:cephfs) < ""
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 142 KiB/s wr, 9 op/s
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:41.580 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:26:41
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['backups', 'volumes', 'images', 'manila_data', 'manila_metadata', '.mgr', 'vms']
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.00037977430555555556 quantized to 32 (current 32)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0012824539363484088 of space, bias 4.0, pg target 1.0208333333333333 quantized to 16 (current 16)
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:26:42 np0005548790.localdomain ceph-mgr[286934]: client.0 ms_handle_reset on v2:172.18.0.108:6810/3354697053
Dec 06 10:26:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "format": "json"}]: dispatch
Dec 06 10:26:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "601ad3a1-8738-4a72-911d-38595abebd4b", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:42 np0005548790.localdomain ceph-mon[301742]: pgmap v631: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 142 KiB/s wr, 9 op/s
Dec 06 10:26:42 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1615917791' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 207 KiB/s wr, 14 op/s
Dec 06 10:26:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "snap_name": "1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a", "format": "json"}]: dispatch
Dec 06 10:26:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4068335658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: pgmap v632: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 207 KiB/s wr, 14 op/s
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "snap_name": "1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a", "format": "json"}]: dispatch
Dec 06 10:26:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:43 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:26:43 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:26:44 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:26:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:44.616 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:44.752 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.921824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016804921860, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2661, "num_deletes": 266, "total_data_size": 4649519, "memory_usage": 4707024, "flush_reason": "Manual Compaction"}
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016804942612, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3017632, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30148, "largest_seqno": 32804, "table_properties": {"data_size": 3007152, "index_size": 6537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25415, "raw_average_key_size": 22, "raw_value_size": 2984935, "raw_average_value_size": 2613, "num_data_blocks": 280, "num_entries": 1142, "num_filter_entries": 1142, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016677, "oldest_key_time": 1765016677, "file_creation_time": 1765016804, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 20883 microseconds, and 7492 cpu microseconds.
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.942701) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3017632 bytes OK
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.942729) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.945355) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.945381) EVENT_LOG_v1 {"time_micros": 1765016804945373, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.945406) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4637022, prev total WAL file size 4637022, number of live WAL files 2.
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.946763) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(2946KB)], [51(18MB)]
Dec 06 10:26:44 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016804946887, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 22205382, "oldest_snapshot_seqno": -1}
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 13941 keys, 20512867 bytes, temperature: kUnknown
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016805060854, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 20512867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20432573, "index_size": 44353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34885, "raw_key_size": 375688, "raw_average_key_size": 26, "raw_value_size": 20194932, "raw_average_value_size": 1448, "num_data_blocks": 1642, "num_entries": 13941, "num_filter_entries": 13941, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016804, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.061242) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 20512867 bytes
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.062906) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.6 rd, 179.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 18.3 +0.0 blob) out(19.6 +0.0 blob), read-write-amplify(14.2) write-amplify(6.8) OK, records in: 14488, records dropped: 547 output_compression: NoCompression
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.062935) EVENT_LOG_v1 {"time_micros": 1765016805062923, "job": 30, "event": "compaction_finished", "compaction_time_micros": 114112, "compaction_time_cpu_micros": 51818, "output_level": 6, "num_output_files": 1, "total_output_size": 20512867, "num_input_records": 14488, "num_output_records": 13941, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016805063413, "job": 30, "event": "table_file_deletion", "file_number": 53}
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016805065926, "job": 30, "event": "table_file_deletion", "file_number": 51}
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:44.946563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.066014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.066019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.066022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.066025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:45 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:26:45.066027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:26:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:45.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:45.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:26:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:45.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:26:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 129 KiB/s wr, 9 op/s
Dec 06 10:26:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:45.353 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:26:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:46.328 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:46.348 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:47.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:47.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: pgmap v633: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 129 KiB/s wr, 9 op/s
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v634: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 129 KiB/s wr, 9 op/s
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "snap_name": "1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a_5759c99a-9a0c-4c9b-8de2-da85e4830d9b", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a_5759c99a-9a0c-4c9b-8de2-da85e4830d9b, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta.tmp'
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta.tmp' to config b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta'
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a_5759c99a-9a0c-4c9b-8de2-da85e4830d9b, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "snap_name": "1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta.tmp'
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta.tmp' to config b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568/.meta'
Dec 06 10:26:47 np0005548790.localdomain sshd[322857]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:26:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:48.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:26:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:26:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:26:48.408 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:26:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:26:48.408 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:26:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:26:48.409 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:26:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:26:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:26:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:26:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18779 "" "Go-http-client/1.1"
Dec 06 10:26:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:26:48 np0005548790.localdomain ceph-mon[301742]: pgmap v634: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 129 KiB/s wr, 9 op/s
Dec 06 10:26:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:49.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 191 KiB/s wr, 13 op/s
Dec 06 10:26:49 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:26:49 np0005548790.localdomain systemd[1]: tmp-crun.JIAaiQ.mount: Deactivated successfully.
Dec 06 10:26:49 np0005548790.localdomain podman[322859]: 2025-12-06 10:26:49.576715375 +0000 UTC m=+0.090153041 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:26:49 np0005548790.localdomain podman[322859]: 2025-12-06 10:26:49.586194611 +0000 UTC m=+0.099632257 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 06 10:26:49 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:26:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:49.788 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "snap_name": "1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a_5759c99a-9a0c-4c9b-8de2-da85e4830d9b", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "snap_name": "1d0736b7-9f9a-4eab-ab98-cf736d1a3e1a", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:50.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:26:50 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: pgmap v635: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 191 KiB/s wr, 13 op/s
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:26:50 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "format": "json"}]: dispatch
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:26:51 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:26:51.115+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a4535dc8-c535-4a4f-a931-b9e1e14f3568' of type subvolume
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a4535dc8-c535-4a4f-a931-b9e1e14f3568' of type subvolume
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a4535dc8-c535-4a4f-a931-b9e1e14f3568'' moved to trashcan
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a4535dc8-c535-4a4f-a931-b9e1e14f3568, vol_name:cephfs) < ""
Dec 06 10:26:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:51.331 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 126 KiB/s wr, 9 op/s
Dec 06 10:26:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.355 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.355 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: tmp-crun.jhjw48.mount: Deactivated successfully.
Dec 06 10:26:52 np0005548790.localdomain podman[322883]: 2025-12-06 10:26:52.560880505 +0000 UTC m=+0.079159673 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 06 10:26:52 np0005548790.localdomain podman[322881]: 2025-12-06 10:26:52.622316668 +0000 UTC m=+0.139174257 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:26:52 np0005548790.localdomain podman[322881]: 2025-12-06 10:26:52.634194359 +0000 UTC m=+0.151051868 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:26:52 np0005548790.localdomain podman[322883]: 2025-12-06 10:26:52.644431127 +0000 UTC m=+0.162710295 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3)
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:26:52 np0005548790.localdomain podman[322884]: 2025-12-06 10:26:52.59470277 +0000 UTC m=+0.104568501 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 06 10:26:52 np0005548790.localdomain podman[322884]: 2025-12-06 10:26:52.729317174 +0000 UTC m=+0.239182944 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Dec 06 10:26:52 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:26:52 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:26:52 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1038771449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:52.860 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:26:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "format": "json"}]: dispatch
Dec 06 10:26:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a4535dc8-c535-4a4f-a931-b9e1e14f3568", "force": true, "format": "json"}]: dispatch
Dec 06 10:26:52 np0005548790.localdomain ceph-mon[301742]: pgmap v636: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 126 KiB/s wr, 9 op/s
Dec 06 10:26:52 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1038771449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.077 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.079 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11443MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.079 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.079 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.164 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.165 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.193 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:26:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 177 KiB/s wr, 13 op/s
Dec 06 10:26:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:26:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:26:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:26:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:26:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:26:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1752695086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.646 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.652 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.672 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.675 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:26:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:53.675 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:26:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1752695086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:26:53 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:26:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:26:53 np0005548790.localdomain sshd[322857]: Connection closed by 101.47.160.186 port 35216 [preauth]
Dec 06 10:26:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:54.676 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:26:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:54.677 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:26:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:54.819 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:54 np0005548790.localdomain ceph-mon[301742]: pgmap v637: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 177 KiB/s wr, 13 op/s
Dec 06 10:26:54 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:26:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 112 KiB/s wr, 7 op/s
Dec 06 10:26:55 np0005548790.localdomain ceph-mon[301742]: pgmap v638: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 112 KiB/s wr, 7 op/s
Dec 06 10:26:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1759336759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e274 e274: 6 total, 6 up, 6 in
Dec 06 10:26:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:56.335 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:26:56 np0005548790.localdomain ceph-mon[301742]: osdmap e274: 6 total, 6 up, 6 in
Dec 06 10:26:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/792248445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 134 KiB/s wr, 9 op/s
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:26:57 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:26:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: pgmap v640: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 134 KiB/s wr, 9 op/s
Dec 06 10:26:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, vol_name:cephfs) < ""
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a/.meta.tmp'
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a/.meta.tmp' to config b'/volumes/_nogroup/1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a/.meta'
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, vol_name:cephfs) < ""
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "format": "json"}]: dispatch
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, vol_name:cephfs) < ""
Dec 06 10:26:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, vol_name:cephfs) < ""
Dec 06 10:26:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:26:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 117 KiB/s wr, 8 op/s
Dec 06 10:26:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:26:59 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:26:59.852 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "format": "json"}]: dispatch
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: pgmap v641: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 117 KiB/s wr, 8 op/s
Dec 06 10:27:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:27:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:00 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:00 np0005548790.localdomain podman[322985]: 2025-12-06 10:27:00.566606051 +0000 UTC m=+0.084012445 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:00 np0005548790.localdomain podman[322985]: 2025-12-06 10:27:00.601589699 +0000 UTC m=+0.118996083 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 06 10:27:00 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:27:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:01.337 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 117 KiB/s wr, 8 op/s
Dec 06 10:27:01 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta.tmp'
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta.tmp' to config b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta'
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "format": "json"}]: dispatch
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, vol_name:cephfs) < ""
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c48080d3-3662-400d-9fd3-efdeb2cd8e3f/.meta.tmp'
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c48080d3-3662-400d-9fd3-efdeb2cd8e3f/.meta.tmp' to config b'/volumes/_nogroup/c48080d3-3662-400d-9fd3-efdeb2cd8e3f/.meta'
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, vol_name:cephfs) < ""
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "format": "json"}]: dispatch
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, vol_name:cephfs) < ""
Dec 06 10:27:02 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, vol_name:cephfs) < ""
Dec 06 10:27:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e275 e275: 6 total, 6 up, 6 in
Dec 06 10:27:02 np0005548790.localdomain ceph-mon[301742]: pgmap v642: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 117 KiB/s wr, 8 op/s
Dec 06 10:27:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:02 np0005548790.localdomain ceph-mon[301742]: osdmap e275: 6 total, 6 up, 6 in
Dec 06 10:27:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 172 KiB/s wr, 11 op/s
Dec 06 10:27:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:27:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:27:03 np0005548790.localdomain systemd[1]: tmp-crun.DGkdF8.mount: Deactivated successfully.
Dec 06 10:27:03 np0005548790.localdomain podman[323006]: 2025-12-06 10:27:03.585615536 +0000 UTC m=+0.091468007 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 06 10:27:03 np0005548790.localdomain podman[323005]: 2025-12-06 10:27:03.647442469 +0000 UTC m=+0.157465123 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:27:03 np0005548790.localdomain podman[323006]: 2025-12-06 10:27:03.67776671 +0000 UTC m=+0.183619151 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller)
Dec 06 10:27:03 np0005548790.localdomain podman[323005]: 2025-12-06 10:27:03.685380245 +0000 UTC m=+0.195402899 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:27:03 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:27:03 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:27:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "format": "json"}]: dispatch
Dec 06 10:27:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "format": "json"}]: dispatch
Dec 06 10:27:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:04 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:04 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:04.855 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: pgmap v644: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 172 KiB/s wr, 11 op/s
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "snap_name": "1034db86-0931-4bff-b9cd-e904ae3ce178", "format": "json"}]: dispatch
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1034db86-0931-4bff-b9cd-e904ae3ce178, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1034db86-0931-4bff-b9cd-e904ae3ce178, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 325 B/s rd, 146 KiB/s wr, 10 op/s
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "format": "json"}]: dispatch
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:05 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:05.755+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c48080d3-3662-400d-9fd3-efdeb2cd8e3f' of type subvolume
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c48080d3-3662-400d-9fd3-efdeb2cd8e3f' of type subvolume
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, vol_name:cephfs) < ""
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c48080d3-3662-400d-9fd3-efdeb2cd8e3f'' moved to trashcan
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c48080d3-3662-400d-9fd3-efdeb2cd8e3f, vol_name:cephfs) < ""
Dec 06 10:27:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:06.339 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "snap_name": "1034db86-0931-4bff-b9cd-e904ae3ce178", "format": "json"}]: dispatch
Dec 06 10:27:06 np0005548790.localdomain ceph-mon[301742]: pgmap v645: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 325 B/s rd, 146 KiB/s wr, 10 op/s
Dec 06 10:27:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "format": "json"}]: dispatch
Dec 06 10:27:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c48080d3-3662-400d-9fd3-efdeb2cd8e3f", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.327 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.328 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:27:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:27:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 138 KiB/s wr, 9 op/s
Dec 06 10:27:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: pgmap v646: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 138 KiB/s wr, 9 op/s
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "snap_name": "1034db86-0931-4bff-b9cd-e904ae3ce178_8e2229e7-2c10-4e6d-9970-a62905b25ae2", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1034db86-0931-4bff-b9cd-e904ae3ce178_8e2229e7-2c10-4e6d-9970-a62905b25ae2, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta.tmp'
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta.tmp' to config b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta'
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1034db86-0931-4bff-b9cd-e904ae3ce178_8e2229e7-2c10-4e6d-9970-a62905b25ae2, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "snap_name": "1034db86-0931-4bff-b9cd-e904ae3ce178", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1034db86-0931-4bff-b9cd-e904ae3ce178, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta.tmp'
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta.tmp' to config b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d/.meta'
Dec 06 10:27:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1034db86-0931-4bff-b9cd-e904ae3ce178, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "snap_name": "1034db86-0931-4bff-b9cd-e904ae3ce178_8e2229e7-2c10-4e6d-9970-a62905b25ae2", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "snap_name": "1034db86-0931-4bff-b9cd-e904ae3ce178", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "format": "json"}]: dispatch
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:09 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:09.095+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a' of type subvolume
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a' of type subvolume
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, vol_name:cephfs) < ""
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a'' moved to trashcan
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a, vol_name:cephfs) < ""
Dec 06 10:27:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 168 KiB/s wr, 9 op/s
Dec 06 10:27:09 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:09.886 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "format": "json"}]: dispatch
Dec 06 10:27:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1be4d0fc-02f4-4c42-ae8f-f48f51b28e7a", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:09 np0005548790.localdomain ceph-mon[301742]: pgmap v647: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 168 KiB/s wr, 9 op/s
Dec 06 10:27:10 np0005548790.localdomain sudo[323053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:27:10 np0005548790.localdomain sudo[323053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:27:10 np0005548790.localdomain sudo[323053]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:10 np0005548790.localdomain sudo[323071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:27:10 np0005548790.localdomain sudo[323071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:27:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e276 e276: 6 total, 6 up, 6 in
Dec 06 10:27:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:27:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:11 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain sudo[323071]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 64053b46-5485-41c4-b79a-ca23008fd85c (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 64053b46-5485-41c4-b79a-ca23008fd85c (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 64053b46-5485-41c4-b79a-ca23008fd85c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:11.341 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 239 B/s rd, 197 KiB/s wr, 11 op/s
Dec 06 10:27:11 np0005548790.localdomain sudo[323121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:27:11 np0005548790.localdomain sudo[323121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:27:11 np0005548790.localdomain sudo[323121]: pam_unix(sudo:session): session closed for user root
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "format": "json"}]: dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d70c0b2c-e84c-411c-8527-44acb71e029d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d70c0b2c-e84c-411c-8527-44acb71e029d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:11.746+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd70c0b2c-e84c-411c-8527-44acb71e029d' of type subvolume
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd70c0b2c-e84c-411c-8527-44acb71e029d' of type subvolume
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d70c0b2c-e84c-411c-8527-44acb71e029d'' moved to trashcan
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d70c0b2c-e84c-411c-8527-44acb71e029d, vol_name:cephfs) < ""
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: osdmap e276: 6 total, 6 up, 6 in
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: pgmap v649: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 239 B/s rd, 197 KiB/s wr, 11 op/s
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "format": "json"}]: dispatch
Dec 06 10:27:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d70c0b2c-e84c-411c-8527-44acb71e029d", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:27:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a57baf93-1000-4372-9325-859e73a86488", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a57baf93-1000-4372-9325-859e73a86488, vol_name:cephfs) < ""
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a57baf93-1000-4372-9325-859e73a86488/.meta.tmp'
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a57baf93-1000-4372-9325-859e73a86488/.meta.tmp' to config b'/volumes/_nogroup/a57baf93-1000-4372-9325-859e73a86488/.meta'
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a57baf93-1000-4372-9325-859e73a86488, vol_name:cephfs) < ""
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a57baf93-1000-4372-9325-859e73a86488", "format": "json"}]: dispatch
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a57baf93-1000-4372-9325-859e73a86488, vol_name:cephfs) < ""
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a57baf93-1000-4372-9325-859e73a86488, vol_name:cephfs) < ""
Dec 06 10:27:12 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:27:12Z|00276|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:27:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a57baf93-1000-4372-9325-859e73a86488", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a57baf93-1000-4372-9325-859e73a86488", "format": "json"}]: dispatch
Dec 06 10:27:13 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 173 KiB/s wr, 10 op/s
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:14 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: pgmap v650: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 173 KiB/s wr, 10 op/s
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:14 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:14.889 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta.tmp'
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta.tmp' to config b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta'
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "format": "json"}]: dispatch
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 173 KiB/s wr, 10 op/s
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a57baf93-1000-4372-9325-859e73a86488", "format": "json"}]: dispatch
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a57baf93-1000-4372-9325-859e73a86488, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a57baf93-1000-4372-9325-859e73a86488, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:15.672+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a57baf93-1000-4372-9325-859e73a86488' of type subvolume
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a57baf93-1000-4372-9325-859e73a86488' of type subvolume
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a57baf93-1000-4372-9325-859e73a86488", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a57baf93-1000-4372-9325-859e73a86488, vol_name:cephfs) < ""
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a57baf93-1000-4372-9325-859e73a86488'' moved to trashcan
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:15 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a57baf93-1000-4372-9325-859e73a86488, vol_name:cephfs) < ""
Dec 06 10:27:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:16.344 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "format": "json"}]: dispatch
Dec 06 10:27:16 np0005548790.localdomain ceph-mon[301742]: pgmap v651: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 173 KiB/s wr, 10 op/s
Dec 06 10:27:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a57baf93-1000-4372-9325-859e73a86488", "format": "json"}]: dispatch
Dec 06 10:27:16 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a57baf93-1000-4372-9325-859e73a86488", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 173 KiB/s wr, 10 op/s
Dec 06 10:27:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e277 e277: 6 total, 6 up, 6 in
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:27:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:27:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:17 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "snap_name": "7884d7aa-ac96-42e0-bc99-f42bebf4c1fc", "format": "json"}]: dispatch
Dec 06 10:27:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7884d7aa-ac96-42e0-bc99-f42bebf4c1fc, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:27:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:27:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7884d7aa-ac96-42e0-bc99-f42bebf4c1fc, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:27:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:27:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:27:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18783 "" "Go-http-client/1.1"
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: pgmap v652: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 173 KiB/s wr, 10 op/s
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: osdmap e277: 6 total, 6 up, 6 in
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:27:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 731 B/s rd, 197 KiB/s wr, 12 op/s
Dec 06 10:27:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "snap_name": "7884d7aa-ac96-42e0-bc99-f42bebf4c1fc", "format": "json"}]: dispatch
Dec 06 10:27:19 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:19.922 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:20 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:27:20 np0005548790.localdomain systemd[1]: tmp-crun.Gqt0Sg.mount: Deactivated successfully.
Dec 06 10:27:20 np0005548790.localdomain podman[323141]: 2025-12-06 10:27:20.577899836 +0000 UTC m=+0.092886865 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec 06 10:27:20 np0005548790.localdomain podman[323141]: 2025-12-06 10:27:20.608400691 +0000 UTC m=+0.123387690 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:27:20 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:27:20 np0005548790.localdomain ceph-mon[301742]: pgmap v654: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 731 B/s rd, 197 KiB/s wr, 12 op/s
Dec 06 10:27:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:21 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:21.347 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 165 KiB/s wr, 10 op/s
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "snap_name": "7884d7aa-ac96-42e0-bc99-f42bebf4c1fc_a26a2613-79e3-4fad-87ed-7fada9c58a20", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7884d7aa-ac96-42e0-bc99-f42bebf4c1fc_a26a2613-79e3-4fad-87ed-7fada9c58a20, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta.tmp'
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta.tmp' to config b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta'
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7884d7aa-ac96-42e0-bc99-f42bebf4c1fc_a26a2613-79e3-4fad-87ed-7fada9c58a20, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "snap_name": "7884d7aa-ac96-42e0-bc99-f42bebf4c1fc", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7884d7aa-ac96-42e0-bc99-f42bebf4c1fc, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta.tmp'
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta.tmp' to config b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05/.meta'
Dec 06 10:27:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7884d7aa-ac96-42e0-bc99-f42bebf4c1fc, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:22 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:22 np0005548790.localdomain ceph-mon[301742]: pgmap v655: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 165 KiB/s wr, 10 op/s
Dec 06 10:27:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 157 KiB/s wr, 10 op/s
Dec 06 10:27:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:27:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:27:23 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:27:23 np0005548790.localdomain podman[323161]: 2025-12-06 10:27:23.579014616 +0000 UTC m=+0.085989108 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc.)
Dec 06 10:27:23 np0005548790.localdomain podman[323161]: 2025-12-06 10:27:23.595345609 +0000 UTC m=+0.102320141 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6)
Dec 06 10:27:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:27:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:27:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:27:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:27:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:27:23 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:27:23 np0005548790.localdomain podman[323159]: 2025-12-06 10:27:23.723383973 +0000 UTC m=+0.233759447 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:27:23 np0005548790.localdomain podman[323160]: 2025-12-06 10:27:23.783945823 +0000 UTC m=+0.290781422 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:27:23 np0005548790.localdomain podman[323159]: 2025-12-06 10:27:23.807218073 +0000 UTC m=+0.317593527 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:27:23 np0005548790.localdomain podman[323160]: 2025-12-06 10:27:23.817946833 +0000 UTC m=+0.324782442 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 06 10:27:23 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:27:23 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:27:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "snap_name": "7884d7aa-ac96-42e0-bc99-f42bebf4c1fc_a26a2613-79e3-4fad-87ed-7fada9c58a20", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "snap_name": "7884d7aa-ac96-42e0-bc99-f42bebf4c1fc", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:23 np0005548790.localdomain ceph-mon[301742]: pgmap v656: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 157 KiB/s wr, 10 op/s
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:24 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:27:24 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:24 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:24.965 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 157 KiB/s wr, 10 op/s
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "format": "json"}]: dispatch
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:25 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:25.672+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd20a1bca-aac4-44af-8a03-caef89ee2c05' of type subvolume
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd20a1bca-aac4-44af-8a03-caef89ee2c05' of type subvolume
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d20a1bca-aac4-44af-8a03-caef89ee2c05'' moved to trashcan
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d20a1bca-aac4-44af-8a03-caef89ee2c05, vol_name:cephfs) < ""
Dec 06 10:27:25 np0005548790.localdomain ceph-mon[301742]: pgmap v657: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 157 KiB/s wr, 10 op/s
Dec 06 10:27:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "format": "json"}]: dispatch
Dec 06 10:27:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d20a1bca-aac4-44af-8a03-caef89ee2c05", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e278 e278: 6 total, 6 up, 6 in
Dec 06 10:27:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:26.350 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:27:26 np0005548790.localdomain ceph-mon[301742]: osdmap e278: 6 total, 6 up, 6 in
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/.meta.tmp'
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/.meta.tmp' to config b'/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/.meta'
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "format": "json"}]: dispatch
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:27:26 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:27:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 536 B/s rd, 165 KiB/s wr, 10 op/s
Dec 06 10:27:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "format": "json"}]: dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: pgmap v659: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 536 B/s rd, 165 KiB/s wr, 10 op/s
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:27 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:28 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 146 KiB/s wr, 9 op/s
Dec 06 10:27:29 np0005548790.localdomain ceph-mon[301742]: pgmap v660: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 146 KiB/s wr, 9 op/s
Dec 06 10:27:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:29.989 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:30.520 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:30 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:27:30.522 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:27:30 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:27:30.523 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/.meta.tmp'
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/.meta.tmp' to config b'/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/.meta'
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "format": "json"}]: dispatch
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:30 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "format": "json"}]: dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:27:31 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:31 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:31.353 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 146 KiB/s wr, 9 op/s
Dec 06 10:27:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:27:31 np0005548790.localdomain podman[323223]: 2025-12-06 10:27:31.567939131 +0000 UTC m=+0.081390531 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:27:31 np0005548790.localdomain podman[323223]: 2025-12-06 10:27:31.585748278 +0000 UTC m=+0.099199678 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:27:31 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: pgmap v661: 177 pgs: 177 active+clean; 217 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 146 KiB/s wr, 9 op/s
Dec 06 10:27:32 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 e279: 6 total, 6 up, 6 in
Dec 06 10:27:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 191 KiB/s wr, 11 op/s
Dec 06 10:27:33 np0005548790.localdomain ceph-mon[301742]: osdmap e279: 6 total, 6 up, 6 in
Dec 06 10:27:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:27:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:27:34 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain systemd[1]: tmp-crun.PRG1zV.mount: Deactivated successfully.
Dec 06 10:27:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:34 np0005548790.localdomain podman[323242]: 2025-12-06 10:27:34.602871014 +0000 UTC m=+0.110473729 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 06 10:27:34 np0005548790.localdomain podman[323241]: 2025-12-06 10:27:34.559995566 +0000 UTC m=+0.073143290 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:27:34 np0005548790.localdomain podman[323241]: 2025-12-06 10:27:34.639110514 +0000 UTC m=+0.152258228 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:27:34 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:27:34 np0005548790.localdomain podman[323242]: 2025-12-06 10:27:34.669720504 +0000 UTC m=+0.177323239 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:27:34 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: pgmap v663: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 191 KiB/s wr, 11 op/s
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5", "osd", "allow rw pool=manila_data namespace=fsvolumens_39a0fb23-501d-479d-b543-1f708ea4574a", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:35.021 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 434 B/s rd, 162 KiB/s wr, 10 op/s
Dec 06 10:27:35 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:27:35.525 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:27:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:36.357 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:36 np0005548790.localdomain ceph-mon[301742]: pgmap v664: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 434 B/s rd, 162 KiB/s wr, 10 op/s
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 153 KiB/s wr, 9 op/s
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5
Dec 06 10:27:37 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a/3e5f0a88-786f-4b8b-a999-6c8148201aa5],prefix=session evict} (starting...)
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:39a0fb23-501d-479d-b543-1f708ea4574a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:39a0fb23-501d-479d-b543-1f708ea4574a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:37.901+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39a0fb23-501d-479d-b543-1f708ea4574a' of type subvolume
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '39a0fb23-501d-479d-b543-1f708ea4574a' of type subvolume
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: pgmap v665: 177 pgs: 177 active+clean; 218 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 153 KiB/s wr, 9 op/s
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:27:37 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/39a0fb23-501d-479d-b543-1f708ea4574a'' moved to trashcan
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:39a0fb23-501d-479d-b543-1f708ea4574a, vol_name:cephfs) < ""
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:38 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:38 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:38 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "format": "json"}]: dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "39a0fb23-501d-479d-b543-1f708ea4574a", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:27:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:27:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 184 KiB/s wr, 10 op/s
Dec 06 10:27:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1777930601' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:27:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1777930601' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:27:39 np0005548790.localdomain ceph-mon[301742]: pgmap v666: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 184 KiB/s wr, 10 op/s
Dec 06 10:27:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:40.063 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/.meta.tmp'
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/.meta.tmp' to config b'/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/.meta'
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "format": "json"}]: dispatch
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "format": "json"}]: dispatch
Dec 06 10:27:40 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:41.359 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 184 KiB/s wr, 10 op/s
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:27:41
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['backups', 'manila_data', 'vms', 'volumes', '.mgr', 'images', 'manila_metadata']
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:41 np0005548790.localdomain ceph-mon[301742]: pgmap v667: 177 pgs: 177 active+clean; 219 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 184 KiB/s wr, 10 op/s
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.00037977430555555556 quantized to 32 (current 32)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.001909685871487065 of space, bias 4.0, pg target 1.5201099537037035 quantized to 16 (current 16)
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:27:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:27:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/585070944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3710319401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:43.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 485 B/s rd, 272 KiB/s wr, 16 op/s
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: pgmap v668: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 485 B/s rd, 272 KiB/s wr, 16 op/s
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:27:44 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:44 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:45.100 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 168 KiB/s wr, 9 op/s
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0", "osd", "allow rw pool=manila_data namespace=fsvolumens_5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:27:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:46.362 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:46 np0005548790.localdomain ceph-mon[301742]: pgmap v669: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 168 KiB/s wr, 9 op/s
Dec 06 10:27:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:47.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:47.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:27:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:47.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:27:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:47.359 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:27:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:47.360 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 168 KiB/s wr, 9 op/s
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0
Dec 06 10:27:47 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02/c27ff085-a577-4abe-b26a-9dc51132e3c0],prefix=session evict} (starting...)
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "format": "json"}]: dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:47.708+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5588df5a-ffa6-47fc-b21c-3740e4fe0c02' of type subvolume
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5588df5a-ffa6-47fc-b21c-3740e4fe0c02' of type subvolume
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5588df5a-ffa6-47fc-b21c-3740e4fe0c02'' moved to trashcan
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5588df5a-ffa6-47fc-b21c-3740e4fe0c02, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:27:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:27:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:27:48.409 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:27:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:27:48.409 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:27:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:27:48.409 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:27:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:27:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:27:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:27:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18776 "" "Go-http-client/1.1"
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: pgmap v670: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 168 KiB/s wr, 9 op/s
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "format": "json"}]: dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5588df5a-ffa6-47fc-b21c-3740e4fe0c02", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:49.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:49.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 249 KiB/s wr, 14 op/s
Dec 06 10:27:49 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:27:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:50.134 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:50.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/.meta.tmp'
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/.meta.tmp' to config b'/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/.meta'
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "format": "json"}]: dispatch
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:50 np0005548790.localdomain ceph-mon[301742]: pgmap v671: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 249 KiB/s wr, 14 op/s
Dec 06 10:27:50 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:51 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:51.376 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 166 KiB/s wr, 10 op/s
Dec 06 10:27:51 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:27:51 np0005548790.localdomain podman[323294]: 2025-12-06 10:27:51.570921967 +0000 UTC m=+0.078254726 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 06 10:27:51 np0005548790.localdomain podman[323294]: 2025-12-06 10:27:51.604233099 +0000 UTC m=+0.111565848 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 06 10:27:51 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "format": "json"}]: dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:27:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:27:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:52.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:27:52 np0005548790.localdomain ceph-mon[301742]: pgmap v672: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 166 KiB/s wr, 10 op/s
Dec 06 10:27:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:53.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:53.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:27:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 235 KiB/s wr, 14 op/s
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:27:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:27:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:27:53 np0005548790.localdomain ceph-mon[301742]: pgmap v673: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 235 KiB/s wr, 14 op/s
Dec 06 10:27:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:53 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:27:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:27:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:53 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:27:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:53 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.356 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.357 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:27:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain podman[323314]: 2025-12-06 10:27:54.588711029 +0000 UTC m=+0.106898932 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:27:54 np0005548790.localdomain podman[323314]: 2025-12-06 10:27:54.619100043 +0000 UTC m=+0.137287926 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: tmp-crun.3OSgU5.mount: Deactivated successfully.
Dec 06 10:27:54 np0005548790.localdomain podman[323316]: 2025-12-06 10:27:54.651559433 +0000 UTC m=+0.166573481 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 06 10:27:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:27:54 np0005548790.localdomain podman[323317]: 2025-12-06 10:27:54.611061698 +0000 UTC m=+0.120608600 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:27:54 np0005548790.localdomain podman[323316]: 2025-12-06 10:27:54.691120402 +0000 UTC m=+0.206134460 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:27:54 np0005548790.localdomain podman[323317]: 2025-12-06 10:27:54.745496917 +0000 UTC m=+0.255043769 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6)
Dec 06 10:27:54 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3684322741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:54.830 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:27:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3684322741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.012 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.013 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11430MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.013 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.013 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.086 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.087 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.128 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.161 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 149 KiB/s wr, 9 op/s
Dec 06 10:27:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:27:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1957927647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.587 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.594 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.793 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.796 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:27:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:55.796 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:27:55 np0005548790.localdomain ceph-mon[301742]: pgmap v674: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 149 KiB/s wr, 9 op/s
Dec 06 10:27:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1957927647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:27:56.382 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:27:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1864458173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 149 KiB/s wr, 9 op/s
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a
Dec 06 10:27:57 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8/647cfa54-e318-4c22-b8b7-7c0ead2d396a],prefix=session evict} (starting...)
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:27:57.517+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1591869a-dd1e-4430-a3b9-ab5fdaf392f8' of type subvolume
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1591869a-dd1e-4430-a3b9-ab5fdaf392f8' of type subvolume
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1591869a-dd1e-4430-a3b9-ab5fdaf392f8'' moved to trashcan
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1591869a-dd1e-4430-a3b9-ab5fdaf392f8, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:27:57 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:27:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2284770246' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: pgmap v675: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 149 KiB/s wr, 9 op/s
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1591869a-dd1e-4430-a3b9-ab5fdaf392f8", "force": true, "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:27:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:27:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:27:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 252 KiB/s wr, 15 op/s
Dec 06 10:28:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:00.184 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:00 np0005548790.localdomain ceph-mon[301742]: pgmap v676: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 252 KiB/s wr, 15 op/s
Dec 06 10:28:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:00 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:28:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 171 KiB/s wr, 10 op/s
Dec 06 10:28:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:01.426 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:02 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:02 np0005548790.localdomain ceph-mon[301742]: pgmap v677: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 171 KiB/s wr, 10 op/s
Dec 06 10:28:02 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:28:02 np0005548790.localdomain podman[323419]: 2025-12-06 10:28:02.582858738 +0000 UTC m=+0.087530589 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 06 10:28:02 np0005548790.localdomain podman[323419]: 2025-12-06 10:28:02.597233363 +0000 UTC m=+0.101905194 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:28:02 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:28:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 235 KiB/s wr, 15 op/s
Dec 06 10:28:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:03 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:28:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73
Dec 06 10:28:04 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73],prefix=session evict} (starting...)
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:04 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: pgmap v678: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 235 KiB/s wr, 15 op/s
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:28:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:05.210 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 167 KiB/s wr, 10 op/s
Dec 06 10:28:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:28:05 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:28:05 np0005548790.localdomain systemd[1]: tmp-crun.mV7fQ3.mount: Deactivated successfully.
Dec 06 10:28:05 np0005548790.localdomain podman[323440]: 2025-12-06 10:28:05.570955733 +0000 UTC m=+0.088690309 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:28:05 np0005548790.localdomain podman[323441]: 2025-12-06 10:28:05.587478176 +0000 UTC m=+0.096622672 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:28:05 np0005548790.localdomain podman[323440]: 2025-12-06 10:28:05.60926314 +0000 UTC m=+0.126997716 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:28:05 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:28:05 np0005548790.localdomain podman[323441]: 2025-12-06 10:28:05.632133644 +0000 UTC m=+0.141278080 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:28:05 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:28:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:05 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:06.428 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:06 np0005548790.localdomain ceph-mon[301742]: pgmap v679: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 167 KiB/s wr, 10 op/s
Dec 06 10:28:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 167 KiB/s wr, 10 op/s
Dec 06 10:28:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: pgmap v680: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 167 KiB/s wr, 10 op/s
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:07 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 263 KiB/s wr, 17 op/s
Dec 06 10:28:09 np0005548790.localdomain ceph-mon[301742]: pgmap v681: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 263 KiB/s wr, 17 op/s
Dec 06 10:28:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:10.246 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73
Dec 06 10:28:10 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73],prefix=session evict} (starting...)
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:28:10 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:11 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 160 KiB/s wr, 11 op/s
Dec 06 10:28:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:11.496 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:11 np0005548790.localdomain sudo[323489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:28:11 np0005548790.localdomain sudo[323489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:28:11 np0005548790.localdomain sudo[323489]: pam_unix(sudo:session): session closed for user root
Dec 06 10:28:11 np0005548790.localdomain sudo[323507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:28:11 np0005548790.localdomain sudo[323507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:28:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:11 np0005548790.localdomain ceph-mon[301742]: pgmap v682: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 160 KiB/s wr, 11 op/s
Dec 06 10:28:12 np0005548790.localdomain sudo[323507]: pam_unix(sudo:session): session closed for user root
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:28:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 05aba698-f953-4bca-8d49-887be951bce9 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:28:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 05aba698-f953-4bca-8d49-887be951bce9 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:28:12 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 05aba698-f953-4bca-8d49-887be951bce9 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:28:12 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:28:12 np0005548790.localdomain sudo[323558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:28:12 np0005548790.localdomain sudo[323558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:28:12 np0005548790.localdomain sudo[323558]: pam_unix(sudo:session): session closed for user root
Dec 06 10:28:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:28:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:28:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:28:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:28:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:28:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:28:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 219 KiB/s wr, 14 op/s
Dec 06 10:28:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:13 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:13 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:13 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: pgmap v683: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 219 KiB/s wr, 14 op/s
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:15 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:15.249 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 155 KiB/s wr, 9 op/s
Dec 06 10:28:16 np0005548790.localdomain ceph-mon[301742]: pgmap v684: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 155 KiB/s wr, 9 op/s
Dec 06 10:28:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:16.498 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 155 KiB/s wr, 9 op/s
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73
Dec 06 10:28:17 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73],prefix=session evict} (starting...)
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:28:17 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:17 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:17 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: pgmap v685: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 155 KiB/s wr, 9 op/s
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:28:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:28:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:28:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:28:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:28:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:28:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18773 "" "Go-http-client/1.1"
Dec 06 10:28:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 254 KiB/s wr, 16 op/s
Dec 06 10:28:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:20.288 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: pgmap v686: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 254 KiB/s wr, 16 op/s
Dec 06 10:28:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID tempest-cephx-id-956934797 with tenant 407388521bb04f21b3ced239438a361c
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume authorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, tenant_id:407388521bb04f21b3ced239438a361c, vol_name:cephfs) < ""
Dec 06 10:28:20 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:20 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:20 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:20 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 158 KiB/s wr, 10 op/s
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "tenant_id": "407388521bb04f21b3ced239438a361c", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-956934797", "caps": ["mds", "allow rw path=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73", "osd", "allow rw pool=manila_data namespace=fsvolumens_b113f2fd-9e34-49b1-8d3c-8099c23d423a", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:21 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:21.533 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:22 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:22 np0005548790.localdomain ceph-mon[301742]: pgmap v687: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 158 KiB/s wr, 10 op/s
Dec 06 10:28:22 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:28:22 np0005548790.localdomain podman[323578]: 2025-12-06 10:28:22.564444065 +0000 UTC m=+0.078146097 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 06 10:28:22 np0005548790.localdomain podman[323578]: 2025-12-06 10:28:22.599026753 +0000 UTC m=+0.112728755 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 06 10:28:22 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:28:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 217 KiB/s wr, 14 op/s
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:28:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} v 0)
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} v 0)
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume deauthorize, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-956934797, client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73
Dec 06 10:28:24 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=tempest-cephx-id-956934797,client_metadata.root=/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a/eeefc94a-19bd-4e4c-8e5e-d71eabbfaa73],prefix=session evict} (starting...)
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-956934797, format:json, prefix:fs subvolume evict, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:24 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:24 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: pgmap v688: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 217 KiB/s wr, 14 op/s
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-956934797", "format": "json"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-956934797"}]': finished
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:28:24 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:28:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:25.324 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 158 KiB/s wr, 11 op/s
Dec 06 10:28:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:28:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:28:25 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:28:25 np0005548790.localdomain podman[323598]: 2025-12-06 10:28:25.580826058 +0000 UTC m=+0.085453952 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:28:25 np0005548790.localdomain podman[323598]: 2025-12-06 10:28:25.587677483 +0000 UTC m=+0.092305437 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:28:25 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:28:25 np0005548790.localdomain podman[323600]: 2025-12-06 10:28:25.634532959 +0000 UTC m=+0.130832890 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41)
Dec 06 10:28:25 np0005548790.localdomain podman[323600]: 2025-12-06 10:28:25.649199663 +0000 UTC m=+0.145499584 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, distribution-scope=public, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 06 10:28:25 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:28:25 np0005548790.localdomain podman[323599]: 2025-12-06 10:28:25.703848198 +0000 UTC m=+0.204362862 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:28:25 np0005548790.localdomain podman[323599]: 2025-12-06 10:28:25.712816468 +0000 UTC m=+0.213331113 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 06 10:28:25 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:28:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "auth_id": "tempest-cephx-id-956934797", "format": "json"}]: dispatch
Dec 06 10:28:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:25 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:25 np0005548790.localdomain ceph-mon[301742]: pgmap v689: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 158 KiB/s wr, 11 op/s
Dec 06 10:28:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:26.535 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 158 KiB/s wr, 11 op/s
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:28:27 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:27 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:27 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:27 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "format": "json"}]: dispatch
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:28:27 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:28:27.959+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b113f2fd-9e34-49b1-8d3c-8099c23d423a' of type subvolume
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b113f2fd-9e34-49b1-8d3c-8099c23d423a' of type subvolume
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "force": true, "format": "json"}]: dispatch
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b113f2fd-9e34-49b1-8d3c-8099c23d423a'' moved to trashcan
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:28:27 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b113f2fd-9e34-49b1-8d3c-8099c23d423a, vol_name:cephfs) < ""
Dec 06 10:28:28 np0005548790.localdomain ceph-mon[301742]: pgmap v690: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 158 KiB/s wr, 11 op/s
Dec 06 10:28:28 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:28 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:28 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:28 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:28 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 240 KiB/s wr, 16 op/s
Dec 06 10:28:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "format": "json"}]: dispatch
Dec 06 10:28:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b113f2fd-9e34-49b1-8d3c-8099c23d423a", "force": true, "format": "json"}]: dispatch
Dec 06 10:28:29 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:29.947 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:30.365 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:30 np0005548790.localdomain ceph-mon[301742]: pgmap v691: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 240 KiB/s wr, 16 op/s
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:31 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 141 KiB/s wr, 9 op/s
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:28:31 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:28:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:31.575 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:32 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:28:32.511 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:28:32 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:28:32.513 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:28:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:32.512 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:32 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:28:32 np0005548790.localdomain ceph-mon[301742]: pgmap v692: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 141 KiB/s wr, 9 op/s
Dec 06 10:28:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 168 KiB/s wr, 12 op/s
Dec 06 10:28:33 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:28:33 np0005548790.localdomain podman[323661]: 2025-12-06 10:28:33.55988018 +0000 UTC m=+0.072921207 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:28:33 np0005548790.localdomain podman[323661]: 2025-12-06 10:28:33.601240698 +0000 UTC m=+0.114281725 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Dec 06 10:28:33 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:34 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:34 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:34 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: pgmap v693: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 168 KiB/s wr, 12 op/s
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:34 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:28:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:35.405 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:35 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:35 np0005548790.localdomain ceph-mon[301742]: pgmap v694: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:28:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:28:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:28:36 np0005548790.localdomain podman[323680]: 2025-12-06 10:28:36.314990605 +0000 UTC m=+0.091068163 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:28:36 np0005548790.localdomain podman[323680]: 2025-12-06 10:28:36.329224067 +0000 UTC m=+0.105301675 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:28:36 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:28:36 np0005548790.localdomain podman[323681]: 2025-12-06 10:28:36.415242415 +0000 UTC m=+0.187831809 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:28:36 np0005548790.localdomain podman[323681]: 2025-12-06 10:28:36.45536249 +0000 UTC m=+0.227951874 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:28:36 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:28:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:36.623 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:28:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:37 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:28:37 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:37 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:37 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: pgmap v695: 177 pgs: 177 active+clean; 228 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:28:38 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 172 KiB/s wr, 11 op/s
Dec 06 10:28:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/12719951' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:28:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/12719951' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:28:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:40.445 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:40 np0005548790.localdomain ceph-mon[301742]: pgmap v696: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 172 KiB/s wr, 11 op/s
Dec 06 10:28:40 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:40 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:28:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:40 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:40 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:40 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 90 KiB/s wr, 6 op/s
Dec 06 10:28:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:41 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:28:41.516 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:28:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:41.667 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:28:41
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'manila_data', 'manila_metadata', '.mgr', 'backups']
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:28:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002598441425879397 of space, bias 4.0, pg target 2.0683593749999996 quantized to 16 (current 16)
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:28:42 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:42 np0005548790.localdomain ceph-mon[301742]: pgmap v697: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 90 KiB/s wr, 6 op/s
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:28:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:28:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:43.090 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:43.336 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 123 KiB/s wr, 8 op/s
Dec 06 10:28:43 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1883521674' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:44 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:44 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: pgmap v698: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 123 KiB/s wr, 8 op/s
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3637666271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 06 10:28:44 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 06 10:28:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 96 KiB/s wr, 5 op/s
Dec 06 10:28:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:45.470 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:45 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice", "format": "json"}]: dispatch
Dec 06 10:28:46 np0005548790.localdomain ceph-mon[301742]: pgmap v699: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 96 KiB/s wr, 5 op/s
Dec 06 10:28:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:46.602 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:46.669 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:47.329 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:47.354 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:47.354 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:28:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:47.354 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:28:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:47.368 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:28:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:47.369 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 96 KiB/s wr, 5 op/s
Dec 06 10:28:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:47 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:47 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:47 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:47 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:28:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:28:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:28:48.410 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:28:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:28:48.410 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:28:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:28:48.410 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:28:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:28:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:28:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:28:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18781 "" "Go-http-client/1.1"
Dec 06 10:28:48 np0005548790.localdomain ceph-mon[301742]: pgmap v700: 177 pgs: 177 active+clean; 229 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 96 KiB/s wr, 5 op/s
Dec 06 10:28:48 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:28:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:48 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 149 KiB/s wr, 9 op/s
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.603058) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929603139, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2758, "num_deletes": 254, "total_data_size": 3125500, "memory_usage": 3191552, "flush_reason": "Manual Compaction"}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929616955, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2002207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32809, "largest_seqno": 35562, "table_properties": {"data_size": 1992001, "index_size": 6139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 26470, "raw_average_key_size": 22, "raw_value_size": 1969480, "raw_average_value_size": 1637, "num_data_blocks": 266, "num_entries": 1203, "num_filter_entries": 1203, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016806, "oldest_key_time": 1765016806, "file_creation_time": 1765016929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 13942 microseconds, and 5959 cpu microseconds.
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.617007) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2002207 bytes OK
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.617034) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.618740) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.618760) EVENT_LOG_v1 {"time_micros": 1765016929618753, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.618806) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3112336, prev total WAL file size 3112336, number of live WAL files 2.
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.619880) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end)
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(1955KB)], [54(19MB)]
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929619989, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 22515074, "oldest_snapshot_seqno": -1}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14613 keys, 20699938 bytes, temperature: kUnknown
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929737280, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 20699938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20614635, "index_size": 47698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36549, "raw_key_size": 391752, "raw_average_key_size": 26, "raw_value_size": 20364823, "raw_average_value_size": 1393, "num_data_blocks": 1774, "num_entries": 14613, "num_filter_entries": 14613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.737657) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 20699938 bytes
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.739737) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.8 rd, 176.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 19.6 +0.0 blob) out(19.7 +0.0 blob), read-write-amplify(21.6) write-amplify(10.3) OK, records in: 15144, records dropped: 531 output_compression: NoCompression
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.739858) EVENT_LOG_v1 {"time_micros": 1765016929739832, "job": 32, "event": "compaction_finished", "compaction_time_micros": 117388, "compaction_time_cpu_micros": 59141, "output_level": 6, "num_output_files": 1, "total_output_size": 20699938, "num_input_records": 15144, "num_output_records": 14613, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929740654, "job": 32, "event": "table_file_deletion", "file_number": 56}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016929744176, "job": 32, "event": "table_file_deletion", "file_number": 54}
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.619598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.744250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.744258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.744261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.744264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:28:49 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:28:49.744266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:28:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:50.367 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:50.507 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:50 np0005548790.localdomain ceph-mon[301742]: pgmap v701: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 149 KiB/s wr, 9 op/s
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:50 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:28:50 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:50 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:50 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:51.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:28:51 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:51 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:28:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:51.706 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:51.823 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:52.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:52 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:52 np0005548790.localdomain ceph-mon[301742]: pgmap v702: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:28:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:53.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:53.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 06 10:28:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:28:53 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:28:53 np0005548790.localdomain systemd[1]: tmp-crun.nq93KX.mount: Deactivated successfully.
Dec 06 10:28:53 np0005548790.localdomain podman[323733]: 2025-12-06 10:28:53.579054176 +0000 UTC m=+0.091358562 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 06 10:28:53 np0005548790.localdomain podman[323733]: 2025-12-06 10:28:53.589156496 +0000 UTC m=+0.101460882 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 06 10:28:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:28:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:28:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:28:53 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:28:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:28:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:28:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:28:53 np0005548790.localdomain ceph-mon[301742]: pgmap v703: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:28:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:53.927 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice_bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:54.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:54.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:28:54 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:28:54 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.354 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.355 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.355 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.356 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:28:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.537 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:28:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1248594145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:55.830 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:28:55 np0005548790.localdomain ceph-mon[301742]: pgmap v704: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:28:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1248594145' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.030 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.032 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11403MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.032 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.033 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.312 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.312 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.419 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:28:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:28:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:28:56 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:28:56 np0005548790.localdomain podman[323775]: 2025-12-06 10:28:56.564125319 +0000 UTC m=+0.063363750 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, tcib_managed=true)
Dec 06 10:28:56 np0005548790.localdomain podman[323776]: 2025-12-06 10:28:56.573658945 +0000 UTC m=+0.067896712 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 06 10:28:56 np0005548790.localdomain podman[323774]: 2025-12-06 10:28:56.625602748 +0000 UTC m=+0.128473997 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:28:56 np0005548790.localdomain podman[323774]: 2025-12-06 10:28:56.634446245 +0000 UTC m=+0.137317494 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:28:56 np0005548790.localdomain podman[323776]: 2025-12-06 10:28:56.64468373 +0000 UTC m=+0.138921497 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec 06 10:28:56 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:28:56 np0005548790.localdomain podman[323775]: 2025-12-06 10:28:56.65363435 +0000 UTC m=+0.152872751 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:28:56 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:28:56 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.741 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:28:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:28:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4015030383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.826 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.833 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.855 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.857 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.858 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.859 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.859 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 06 10:28:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:28:56.882 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 06 10:28:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4015030383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:28:57 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:28:57 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: pgmap v705: 177 pgs: 177 active+clean; 230 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 06 10:28:57 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 06 10:28:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1148555436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:28:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:28:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 128 KiB/s wr, 8 op/s
Dec 06 10:29:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3936924066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:00 np0005548790.localdomain ceph-mon[301742]: pgmap v706: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 128 KiB/s wr, 8 op/s
Dec 06 10:29:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:00.558 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:29:00 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:29:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:00 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:29:01 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:29:01 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:29:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:29:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:01.785 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:02 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:29:02 np0005548790.localdomain ceph-mon[301742]: pgmap v707: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:29:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:29:03 np0005548790.localdomain ceph-mon[301742]: pgmap v708: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:29:04 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:29:04 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:04 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:29:04 np0005548790.localdomain podman[323853]: 2025-12-06 10:29:04.573708758 +0000 UTC m=+0.083755526 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:29:04 np0005548790.localdomain podman[323853]: 2025-12-06 10:29:04.58567695 +0000 UTC m=+0.095723738 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true)
Dec 06 10:29:04 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:29:04 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:29:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:05.585 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:05 np0005548790.localdomain ceph-mon[301742]: pgmap v709: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:29:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:29:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:29:06 np0005548790.localdomain podman[323872]: 2025-12-06 10:29:06.565495771 +0000 UTC m=+0.082150994 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:29:06 np0005548790.localdomain podman[323872]: 2025-12-06 10:29:06.574564945 +0000 UTC m=+0.091220278 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:29:06 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:29:06 np0005548790.localdomain podman[323873]: 2025-12-06 10:29:06.620379733 +0000 UTC m=+0.131960760 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:29:06 np0005548790.localdomain podman[323873]: 2025-12-06 10:29:06.702426784 +0000 UTC m=+0.214007801 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 06 10:29:06 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:29:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:06.787 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:29:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:29:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:29:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:29:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:29:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:07 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID alice bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:29:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:29:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:07 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:08 np0005548790.localdomain ceph-mon[301742]: pgmap v710: 177 pgs: 177 active+clean; 231 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:29:08 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "r", "format": "json"}]: dispatch
Dec 06 10:29:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:08 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow r pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:29:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 138 KiB/s wr, 9 op/s
Dec 06 10:29:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:10.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:10 np0005548790.localdomain ceph-mon[301742]: pgmap v711: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 138 KiB/s wr, 9 op/s
Dec 06 10:29:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:10.626 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:10 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:29:11 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 06 10:29:11 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 06 10:29:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:11.834 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:29:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:29:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:12 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 06 10:29:12 np0005548790.localdomain ceph-mon[301742]: pgmap v712: 177 pgs: 177 active+clean; 232 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 86 KiB/s wr, 5 op/s
Dec 06 10:29:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:29:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:29:12 np0005548790.localdomain sudo[323922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:29:12 np0005548790.localdomain sudo[323922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:29:12 np0005548790.localdomain sudo[323922]: pam_unix(sudo:session): session closed for user root
Dec 06 10:29:12 np0005548790.localdomain sudo[323940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:29:12 np0005548790.localdomain sudo[323940]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:29:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:29:13 np0005548790.localdomain sudo[323940]: pam_unix(sudo:session): session closed for user root
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:29:13 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 7ce1b396-586e-45ff-b7a2-2e9c156a393e (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:29:13 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 7ce1b396-586e-45ff-b7a2-2e9c156a393e (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:29:13 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 7ce1b396-586e-45ff-b7a2-2e9c156a393e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: pgmap v713: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 109 KiB/s wr, 7 op/s
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:29:13 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:29:14 np0005548790.localdomain sudo[323990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:29:14 np0005548790.localdomain sudo[323990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:29:14 np0005548790.localdomain sudo[323990]: pam_unix(sudo:session): session closed for user root
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:14 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: Creating meta for ID bob with tenant 8223febae67d4b58a139c7a23382ebf9
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} v 0)
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"} : dispatch
Dec 06 10:29:14 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723", "mon", "allow r"], "format": "json"}]': finished
Dec 06 10:29:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:29:15 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:15.667 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:15 np0005548790.localdomain ceph-mon[301742]: pgmap v714: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:29:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:16.866 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:17 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:29:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:29:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:29:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:29:18 np0005548790.localdomain ceph-mon[301742]: pgmap v715: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 76 KiB/s wr, 5 op/s
Dec 06 10:29:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:29:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:29:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:29:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:29:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:29:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18780 "" "Go-http-client/1.1"
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/.meta.tmp'
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/.meta.tmp' to config b'/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/.meta'
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "format": "json"}]: dispatch
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:18 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:29:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "format": "json"}]: dispatch
Dec 06 10:29:19 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:29:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 108 KiB/s wr, 7 op/s
Dec 06 10:29:19 np0005548790.localdomain sshd[324008]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:29:20 np0005548790.localdomain ceph-mon[301742]: pgmap v716: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 108 KiB/s wr, 7 op/s
Dec 06 10:29:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:20.699 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 55 KiB/s wr, 3 op/s
Dec 06 10:29:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:21.869 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "auth_id": "bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:29:21 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 06 10:29:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]} v 0)
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, tenant_id:8223febae67d4b58a139c7a23382ebf9, vol_name:cephfs) < ""
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: pgmap v717: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 55 KiB/s wr, 3 op/s
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b,allow rw path=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723,allow rw pool=manila_data namespace=fsvolumens_ac611655-851d-48c2-9d00-93668f6ff5e1"]}]': finished
Dec 06 10:29:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:22 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:29:22Z|00277|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Dec 06 10:29:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 95 KiB/s wr, 5 op/s
Dec 06 10:29:23 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "auth_id": "bob", "tenant_id": "8223febae67d4b58a139c7a23382ebf9", "access_level": "rw", "format": "json"}]: dispatch
Dec 06 10:29:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:29:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:29:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:29:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:29:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:29:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:24 np0005548790.localdomain ceph-mon[301742]: pgmap v718: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 95 KiB/s wr, 5 op/s
Dec 06 10:29:24 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:29:24 np0005548790.localdomain podman[324010]: 2025-12-06 10:29:24.572072666 +0000 UTC m=+0.085034371 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:29:24 np0005548790.localdomain podman[324010]: 2025-12-06 10:29:24.608340439 +0000 UTC m=+0.121302174 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:29:24 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 71 KiB/s wr, 4 op/s
Dec 06 10:29:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 06 10:29:25 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:25 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]} v 0)
Dec 06 10:29:25 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]} : dispatch
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb
Dec 06 10:29:25 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1/35c0f9c9-609d-4a42-9e75-505856ad36fb],prefix=session evict} (starting...)
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:29:25 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:25 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:25.731 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:26 np0005548790.localdomain sshd[324008]: Connection closed by 101.47.160.186 port 47922 [preauth]
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: pgmap v719: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 71 KiB/s wr, 4 op/s
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]} : dispatch
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]} : dispatch
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b", "osd", "allow rw pool=manila_data namespace=fsvolumens_6fddad9a-edda-44e9-b738-5688693ea723"]}]': finished
Dec 06 10:29:26 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:26.898 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 71 KiB/s wr, 4 op/s
Dec 06 10:29:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:29:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:29:27 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:29:27 np0005548790.localdomain podman[324028]: 2025-12-06 10:29:27.578057111 +0000 UTC m=+0.087334773 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 06 10:29:27 np0005548790.localdomain podman[324028]: 2025-12-06 10:29:27.588029759 +0000 UTC m=+0.097307381 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 06 10:29:27 np0005548790.localdomain podman[324027]: 2025-12-06 10:29:27.620330815 +0000 UTC m=+0.134009866 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:29:27 np0005548790.localdomain podman[324027]: 2025-12-06 10:29:27.630673123 +0000 UTC m=+0.144352114 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:29:27 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:29:27 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:29:27 np0005548790.localdomain podman[324029]: 2025-12-06 10:29:27.736905902 +0000 UTC m=+0.242701411 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350)
Dec 06 10:29:27 np0005548790.localdomain podman[324029]: 2025-12-06 10:29:27.774476269 +0000 UTC m=+0.280271778 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 06 10:29:27 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: pgmap v720: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 71 KiB/s wr, 4 op/s
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:29:28 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Cumulative writes: 4747 writes, 36K keys, 4747 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s
                                                           Cumulative WAL: 4747 writes, 4747 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2468 writes, 13K keys, 2468 commit groups, 1.0 writes per commit group, ingest: 18.67 MB, 0.03 MB/s
                                                           Interval WAL: 2468 writes, 2468 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    141.7      0.30              0.10        16    0.019       0      0       0.0       0.0
                                                             L6      1/0   19.74 MB   0.0      0.3     0.0      0.3       0.3      0.0       0.0   6.6    178.6    165.3      1.68              0.72        15    0.112    201K   7676       0.0       0.0
                                                            Sum      1/0   19.74 MB   0.0      0.3     0.0      0.3       0.3      0.1       0.0   7.6    151.8    161.7      1.98              0.82        31    0.064    201K   7676       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0  14.2    165.1    166.0      0.96              0.40        16    0.060    111K   4216       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.3       0.3      0.0       0.0   0.0    178.6    165.3      1.68              0.72        15    0.112    201K   7676       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    143.4      0.29              0.10        15    0.020       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.041, interval 0.011
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.31 GB write, 0.27 MB/s write, 0.29 GB read, 0.25 MB/s read, 2.0 seconds
                                                           Interval compaction: 0.16 GB write, 0.26 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x55bcb02831f0#2 capacity: 304.00 MB usage: 24.86 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000207 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1294,23.56 MB,7.75005%) FilterBlock(31,588.30 KB,0.188983%) IndexBlock(31,743.73 KB,0.238915%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b
Dec 06 10:29:28 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723/6646f798-4ba5-4e2d-868f-aafbfee9e73b],prefix=session evict} (starting...)
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 06 10:29:28 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 105 KiB/s wr, 6 op/s
Dec 06 10:29:29 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 06 10:29:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 06 10:29:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 06 10:29:29 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec 06 10:29:30 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "auth_id": "bob", "format": "json"}]: dispatch
Dec 06 10:29:30 np0005548790.localdomain ceph-mon[301742]: pgmap v721: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 105 KiB/s wr, 6 op/s
Dec 06 10:29:30 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:30.762 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s wr, 4 op/s
Dec 06 10:29:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:31.945 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "format": "json"}]: dispatch
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ac611655-851d-48c2-9d00-93668f6ff5e1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ac611655-851d-48c2-9d00-93668f6ff5e1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac611655-851d-48c2-9d00-93668f6ff5e1' of type subvolume
Dec 06 10:29:32 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:29:32.415+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac611655-851d-48c2-9d00-93668f6ff5e1' of type subvolume
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "force": true, "format": "json"}]: dispatch
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ac611655-851d-48c2-9d00-93668f6ff5e1'' moved to trashcan
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:29:32 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac611655-851d-48c2-9d00-93668f6ff5e1, vol_name:cephfs) < ""
Dec 06 10:29:32 np0005548790.localdomain ceph-mon[301742]: pgmap v722: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s wr, 4 op/s
Dec 06 10:29:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 119 KiB/s wr, 6 op/s
Dec 06 10:29:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "format": "json"}]: dispatch
Dec 06 10:29:33 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac611655-851d-48c2-9d00-93668f6ff5e1", "force": true, "format": "json"}]: dispatch
Dec 06 10:29:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:34 np0005548790.localdomain ceph-mon[301742]: pgmap v723: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 119 KiB/s wr, 6 op/s
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 79 KiB/s wr, 4 op/s
Dec 06 10:29:35 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:29:35 np0005548790.localdomain podman[324087]: 2025-12-06 10:29:35.568106057 +0000 UTC m=+0.085004711 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 06 10:29:35 np0005548790.localdomain podman[324087]: 2025-12-06 10:29:35.579184944 +0000 UTC m=+0.096083548 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:29:35 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fddad9a-edda-44e9-b738-5688693ea723", "format": "json"}]: dispatch
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6fddad9a-edda-44e9-b738-5688693ea723, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:6fddad9a-edda-44e9-b738-5688693ea723, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:29:35 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:29:35.742+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fddad9a-edda-44e9-b738-5688693ea723' of type subvolume
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6fddad9a-edda-44e9-b738-5688693ea723' of type subvolume
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "force": true, "format": "json"}]: dispatch
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6fddad9a-edda-44e9-b738-5688693ea723'' moved to trashcan
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:29:35 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6fddad9a-edda-44e9-b738-5688693ea723, vol_name:cephfs) < ""
Dec 06 10:29:35 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:35.794 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:36 np0005548790.localdomain ceph-mon[301742]: pgmap v724: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 79 KiB/s wr, 4 op/s
Dec 06 10:29:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6fddad9a-edda-44e9-b738-5688693ea723", "format": "json"}]: dispatch
Dec 06 10:29:36 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6fddad9a-edda-44e9-b738-5688693ea723", "force": true, "format": "json"}]: dispatch
Dec 06 10:29:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:36.993 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 79 KiB/s wr, 4 op/s
Dec 06 10:29:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:29:37 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:29:37 np0005548790.localdomain podman[324107]: 2025-12-06 10:29:37.557021343 +0000 UTC m=+0.069135265 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 06 10:29:37 np0005548790.localdomain systemd[1]: tmp-crun.Rj8MDt.mount: Deactivated successfully.
Dec 06 10:29:37 np0005548790.localdomain podman[324106]: 2025-12-06 10:29:37.627275017 +0000 UTC m=+0.139181323 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:29:37 np0005548790.localdomain podman[324106]: 2025-12-06 10:29:37.637350348 +0000 UTC m=+0.149256654 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:29:37 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:29:37 np0005548790.localdomain podman[324107]: 2025-12-06 10:29:37.694282744 +0000 UTC m=+0.206396716 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 06 10:29:37 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:29:38 np0005548790.localdomain ceph-mon[301742]: pgmap v725: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 79 KiB/s wr, 4 op/s
Dec 06 10:29:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 103 KiB/s wr, 5 op/s
Dec 06 10:29:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3284518139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:29:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3284518139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:29:40 np0005548790.localdomain ceph-mon[301742]: pgmap v726: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 103 KiB/s wr, 5 op/s
Dec 06 10:29:40 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:29:40.691 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:29:40 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:29:40.692 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:29:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:40.730 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:40 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:40.797 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 69 KiB/s wr, 3 op/s
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:29:41
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['volumes', 'backups', 'images', '.mgr', 'vms', 'manila_data', 'manila_metadata']
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:29:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:29:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:42.031 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002951225310580681 of space, bias 4.0, pg target 2.349175347222222 quantized to 16 (current 16)
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:29:42 np0005548790.localdomain ceph-mon[301742]: pgmap v727: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 69 KiB/s wr, 3 op/s
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:29:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:29:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 80 KiB/s wr, 4 op/s
Dec 06 10:29:43 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:29:43.694 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:29:43 np0005548790.localdomain ceph-mon[301742]: pgmap v728: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 80 KiB/s wr, 4 op/s
Dec 06 10:29:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:44 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:44.351 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:44 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1217810012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Dec 06 10:29:45 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:45.819 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:45 np0005548790.localdomain ceph-mon[301742]: pgmap v729: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Dec 06 10:29:45 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3042594742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:47.078 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:47.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:47.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:29:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:47.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:29:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:47.357 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:29:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.943395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016987943475, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1148, "num_deletes": 257, "total_data_size": 1153329, "memory_usage": 1177360, "flush_reason": "Manual Compaction"}
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016987952112, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 741105, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35567, "largest_seqno": 36710, "table_properties": {"data_size": 736493, "index_size": 2083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11423, "raw_average_key_size": 19, "raw_value_size": 726591, "raw_average_value_size": 1268, "num_data_blocks": 92, "num_entries": 573, "num_filter_entries": 573, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016930, "oldest_key_time": 1765016930, "file_creation_time": 1765016987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 8779 microseconds, and 3354 cpu microseconds.
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.952177) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 741105 bytes OK
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.952205) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954029) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954052) EVENT_LOG_v1 {"time_micros": 1765016987954045, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1147508, prev total WAL file size 1147832, number of live WAL files 2.
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954817) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353232' seq:72057594037927935, type:22 .. '6C6F676D0034373735' seq:0, type:0; will stop at (end)
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(723KB)], [57(19MB)]
Dec 06 10:29:47 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016987954883, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 21441043, "oldest_snapshot_seqno": -1}
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14651 keys, 21308485 bytes, temperature: kUnknown
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016988072701, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 21308485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 21221470, "index_size": 49282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36677, "raw_key_size": 393816, "raw_average_key_size": 26, "raw_value_size": 20969595, "raw_average_value_size": 1431, "num_data_blocks": 1837, "num_entries": 14651, "num_filter_entries": 14651, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765016987, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.073090) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 21308485 bytes
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.075602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.8 rd, 180.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 19.7 +0.0 blob) out(20.3 +0.0 blob), read-write-amplify(57.7) write-amplify(28.8) OK, records in: 15186, records dropped: 535 output_compression: NoCompression
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.075632) EVENT_LOG_v1 {"time_micros": 1765016988075619, "job": 34, "event": "compaction_finished", "compaction_time_micros": 117947, "compaction_time_cpu_micros": 56086, "output_level": 6, "num_output_files": 1, "total_output_size": 21308485, "num_input_records": 15186, "num_output_records": 14651, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016988075909, "job": 34, "event": "table_file_deletion", "file_number": 59}
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765016988079098, "job": 34, "event": "table_file_deletion", "file_number": 57}
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:47.954667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.079173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.079178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.079179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.079181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:29:48.079182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:29:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:29:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:29:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:29:48.410 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:29:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:29:48.411 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:29:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:29:48.411 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:29:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:29:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:29:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:29:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18782 "" "Go-http-client/1.1"
Dec 06 10:29:48 np0005548790.localdomain ceph-mon[301742]: pgmap v730: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 35 KiB/s wr, 2 op/s
Dec 06 10:29:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:49.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 39 KiB/s wr, 2 op/s
Dec 06 10:29:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:50.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:50 np0005548790.localdomain ceph-mon[301742]: pgmap v731: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 39 KiB/s wr, 2 op/s
Dec 06 10:29:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:50.864 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:51.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:29:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:52.113 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:52 np0005548790.localdomain ceph-mon[301742]: pgmap v732: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:29:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:53.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:29:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:29:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:29:53 np0005548790.localdomain ceph-mon[301742]: pgmap v733: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 15 KiB/s wr, 1 op/s
Dec 06 10:29:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:54.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.363 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.364 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.364 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.364 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.365 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:29:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 06 10:29:55 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:29:55 np0005548790.localdomain podman[324156]: 2025-12-06 10:29:55.576931596 +0000 UTC m=+0.082402882 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:29:55 np0005548790.localdomain podman[324156]: 2025-12-06 10:29:55.60916759 +0000 UTC m=+0.114638886 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 06 10:29:55 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:29:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:29:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/522862524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.833 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:29:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:55.905 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.063 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.065 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11377MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.065 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.066 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.306 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.306 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.368 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing inventories for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.390 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating ProviderTree inventory for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.390 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Updating inventory in ProviderTree for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.413 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing aggregate associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.434 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Refreshing trait associations for resource provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0, traits: HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_BMI,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SVM,HW_CPU_X86_SSE2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.453 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:29:56 np0005548790.localdomain ceph-mon[301742]: pgmap v734: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 06 10:29:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/522862524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:29:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1038709677' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.919 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.926 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.948 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.950 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:29:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:56.951 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:29:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:29:57.155 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:29:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 06 10:29:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1038709677' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta.tmp'
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta.tmp' to config b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta'
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "format": "json"}]: dispatch
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:29:58 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:29:58 np0005548790.localdomain ceph-mon[301742]: pgmap v735: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec 06 10:29:58 np0005548790.localdomain ceph-mon[301742]: from='client.25237 172.18.0.34:0/2532600599' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: tmp-crun.SAQhfJ.mount: Deactivated successfully.
Dec 06 10:29:58 np0005548790.localdomain podman[324216]: 2025-12-06 10:29:58.557630083 +0000 UTC m=+0.064084670 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:29:58 np0005548790.localdomain podman[324216]: 2025-12-06 10:29:58.593107285 +0000 UTC m=+0.099561852 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: tmp-crun.UNJZ9A.mount: Deactivated successfully.
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:29:58 np0005548790.localdomain podman[324217]: 2025-12-06 10:29:58.604532231 +0000 UTC m=+0.110689060 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:29:58 np0005548790.localdomain podman[324217]: 2025-12-06 10:29:58.612525715 +0000 UTC m=+0.118682574 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm)
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:29:58 np0005548790.localdomain podman[324218]: 2025-12-06 10:29:58.659988908 +0000 UTC m=+0.162314505 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, config_id=edpm, version=9.6, architecture=x86_64, release=1755695350)
Dec 06 10:29:58 np0005548790.localdomain podman[324218]: 2025-12-06 10:29:58.672210286 +0000 UTC m=+0.174535953 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal)
Dec 06 10:29:58 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:29:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:29:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Dec 06 10:29:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 06 10:29:59 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "format": "json"}]: dispatch
Dec 06 10:29:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4167972808' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:00 np0005548790.localdomain ceph-mon[301742]: pgmap v736: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Dec 06 10:30:00 np0005548790.localdomain ceph-mon[301742]: overall HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 06 10:30:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:00.908 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "snap_name": "651d1145-bde4-4592-9ceb-573b615a4af1", "format": "json"}]: dispatch
Dec 06 10:30:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:651d1145-bde4-4592-9ceb-573b615a4af1, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 06 10:30:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2145475030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:01 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:651d1145-bde4-4592-9ceb-573b615a4af1, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:02.186 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:03 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "snap_name": "651d1145-bde4-4592-9ceb-573b615a4af1", "format": "json"}]: dispatch
Dec 06 10:30:03 np0005548790.localdomain ceph-mon[301742]: pgmap v737: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec 06 10:30:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:04 np0005548790.localdomain ceph-mon[301742]: pgmap v738: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "snap_name": "651d1145-bde4-4592-9ceb-573b615a4af1_41e045a4-96eb-4b50-9798-8ab25b9deb95", "force": true, "format": "json"}]: dispatch
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:651d1145-bde4-4592-9ceb-573b615a4af1_41e045a4-96eb-4b50-9798-8ab25b9deb95, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta.tmp'
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta.tmp' to config b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta'
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:651d1145-bde4-4592-9ceb-573b615a4af1_41e045a4-96eb-4b50-9798-8ab25b9deb95, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "snap_name": "651d1145-bde4-4592-9ceb-573b615a4af1", "force": true, "format": "json"}]: dispatch
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:651d1145-bde4-4592-9ceb-573b615a4af1, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta.tmp'
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta.tmp' to config b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92/.meta'
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:05 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:651d1145-bde4-4592-9ceb-573b615a4af1, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:05 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:05.951 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:06 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:30:06 np0005548790.localdomain podman[324277]: 2025-12-06 10:30:06.309739697 +0000 UTC m=+0.078256830 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 06 10:30:06 np0005548790.localdomain podman[324277]: 2025-12-06 10:30:06.325244713 +0000 UTC m=+0.093761856 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:30:06 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:30:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "snap_name": "651d1145-bde4-4592-9ceb-573b615a4af1_41e045a4-96eb-4b50-9798-8ab25b9deb95", "force": true, "format": "json"}]: dispatch
Dec 06 10:30:06 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "snap_name": "651d1145-bde4-4592-9ceb-573b615a4af1", "force": true, "format": "json"}]: dispatch
Dec 06 10:30:06 np0005548790.localdomain ceph-mon[301742]: pgmap v739: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:07.231 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:30:08 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:30:08 np0005548790.localdomain ceph-mon[301742]: pgmap v740: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:08 np0005548790.localdomain podman[324297]: 2025-12-06 10:30:08.578202181 +0000 UTC m=+0.086296486 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:30:08 np0005548790.localdomain podman[324297]: 2025-12-06 10:30:08.591016694 +0000 UTC m=+0.099110989 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:30:08 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:30:08 np0005548790.localdomain podman[324298]: 2025-12-06 10:30:08.636530495 +0000 UTC m=+0.141026994 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "format": "json"}]: dispatch
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:caa1e0c3-0842-4cff-aec6-04859a719f92, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:30:08 np0005548790.localdomain podman[324298]: 2025-12-06 10:30:08.702814352 +0000 UTC m=+0.207310851 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:caa1e0c3-0842-4cff-aec6-04859a719f92, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'caa1e0c3-0842-4cff-aec6-04859a719f92' of type subvolume
Dec 06 10:30:08 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:30:08.703+0000 7f06345ec640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'caa1e0c3-0842-4cff-aec6-04859a719f92' of type subvolume
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "force": true, "format": "json"}]: dispatch
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/caa1e0c3-0842-4cff-aec6-04859a719f92'' moved to trashcan
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 06 10:30:08 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:caa1e0c3-0842-4cff-aec6-04859a719f92, vol_name:cephfs) < ""
Dec 06 10:30:08 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:30:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:30:08 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.1 total, 600.0 interval
                                                          Cumulative writes: 20K writes, 76K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 20K writes, 6905 syncs, 2.95 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 25.51 MB, 0.04 MB/s
                                                          Interval WAL: 10K writes, 4083 syncs, 2.54 writes per sync, written: 0.02 GB, 0.04 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:30:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Dec 06 10:30:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "format": "json"}]: dispatch
Dec 06 10:30:09 np0005548790.localdomain ceph-mon[301742]: from='client.25237 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "caa1e0c3-0842-4cff-aec6-04859a719f92", "force": true, "format": "json"}]: dispatch
Dec 06 10:30:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e280 e280: 6 total, 6 up, 6 in
Dec 06 10:30:10 np0005548790.localdomain ceph-mon[301742]: pgmap v741: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 45 KiB/s wr, 2 op/s
Dec 06 10:30:10 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:10.996 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 44 KiB/s wr, 3 op/s
Dec 06 10:30:11 np0005548790.localdomain ceph-mon[301742]: osdmap e280: 6 total, 6 up, 6 in
Dec 06 10:30:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:30:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:30:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:30:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:30:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:12.262 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:12 np0005548790.localdomain ceph-mon[301742]: pgmap v743: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 44 KiB/s wr, 3 op/s
Dec 06 10:30:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:30:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:30:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:30:13 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.2 total, 600.0 interval
                                                          Cumulative writes: 26K writes, 100K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.01 MB/s
                                                          Cumulative WAL: 26K writes, 9481 syncs, 2.80 writes per sync, written: 0.10 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 13K writes, 51K keys, 13K commit groups, 1.0 writes per commit group, ingest: 54.67 MB, 0.09 MB/s
                                                          Interval WAL: 13K writes, 5358 syncs, 2.48 writes per sync, written: 0.05 GB, 0.09 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:30:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 3 op/s
Dec 06 10:30:13 np0005548790.localdomain ceph-mon[301742]: pgmap v744: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 3 op/s
Dec 06 10:30:14 np0005548790.localdomain sudo[324345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:30:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:14 np0005548790.localdomain sudo[324345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:30:14 np0005548790.localdomain sudo[324345]: pam_unix(sudo:session): session closed for user root
Dec 06 10:30:14 np0005548790.localdomain sudo[324363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:30:14 np0005548790.localdomain sudo[324363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:30:14 np0005548790.localdomain sudo[324363]: pam_unix(sudo:session): session closed for user root
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:30:15 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 0ac94c56-8a1a-4d35-a832-6d0dd597112c (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:30:15 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 0ac94c56-8a1a-4d35-a832-6d0dd597112c (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:30:15 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 0ac94c56-8a1a-4d35-a832-6d0dd597112c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:30:15 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:30:15 np0005548790.localdomain sudo[324414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:30:15 np0005548790.localdomain sudo[324414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:30:15 np0005548790.localdomain sudo[324414]: pam_unix(sudo:session): session closed for user root
Dec 06 10:30:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 3 op/s
Dec 06 10:30:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:16.028 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:16 np0005548790.localdomain ceph-mon[301742]: pgmap v745: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 3 op/s
Dec 06 10:30:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:17.307 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:17 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:30:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:30:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 3 op/s
Dec 06 10:30:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 e281: 6 total, 6 up, 6 in
Dec 06 10:30:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:30:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:30:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:30:18 np0005548790.localdomain ceph-mon[301742]: pgmap v746: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 62 KiB/s wr, 3 op/s
Dec 06 10:30:18 np0005548790.localdomain ceph-mon[301742]: osdmap e281: 6 total, 6 up, 6 in
Dec 06 10:30:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:30:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:30:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:30:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18776 "" "Go-http-client/1.1"
Dec 06 10:30:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 230 B/s rd, 54 KiB/s wr, 2 op/s
Dec 06 10:30:20 np0005548790.localdomain ceph-mon[301742]: pgmap v748: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 230 B/s rd, 54 KiB/s wr, 2 op/s
Dec 06 10:30:20 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:20.770 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:20.772 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '72:6c:02', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:a8:2f:0c:cb:a1'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:30:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:20.773 159200 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 06 10:30:20 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:20.774 159200 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=33b2d0f4-3dae-458c-b286-c937c7cb3d9e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 06 10:30:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:21.030 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 48 KiB/s wr, 2 op/s
Dec 06 10:30:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:22.343 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: pgmap v749: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 48 KiB/s wr, 2 op/s
Dec 06 10:30:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:22.868 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.977170) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017022977234, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 716, "num_deletes": 251, "total_data_size": 682557, "memory_usage": 695248, "flush_reason": "Manual Compaction"}
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017022982580, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 346964, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36715, "largest_seqno": 37426, "table_properties": {"data_size": 344004, "index_size": 879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8408, "raw_average_key_size": 21, "raw_value_size": 337544, "raw_average_value_size": 843, "num_data_blocks": 39, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765016987, "oldest_key_time": 1765016987, "file_creation_time": 1765017022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 5471 microseconds, and 2276 cpu microseconds.
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.982640) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 346964 bytes OK
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.982667) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.984758) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.984821) EVENT_LOG_v1 {"time_micros": 1765017022984811, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.984850) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 678674, prev total WAL file size 678998, number of live WAL files 2.
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.986628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323535' seq:72057594037927935, type:22 .. '6D6772737461740034353036' seq:0, type:0; will stop at (end)
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(338KB)], [60(20MB)]
Dec 06 10:30:22 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017022986736, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 21655449, "oldest_snapshot_seqno": -1}
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 14540 keys, 19629898 bytes, temperature: kUnknown
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017023098426, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 19629898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19548032, "index_size": 44453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 391789, "raw_average_key_size": 26, "raw_value_size": 19302404, "raw_average_value_size": 1327, "num_data_blocks": 1635, "num_entries": 14540, "num_filter_entries": 14540, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765017022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.098763) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 19629898 bytes
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.101175) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.7 rd, 175.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 20.3 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(119.0) write-amplify(56.6) OK, records in: 15051, records dropped: 511 output_compression: NoCompression
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.101206) EVENT_LOG_v1 {"time_micros": 1765017023101193, "job": 36, "event": "compaction_finished", "compaction_time_micros": 111799, "compaction_time_cpu_micros": 55231, "output_level": 6, "num_output_files": 1, "total_output_size": 19629898, "num_input_records": 15051, "num_output_records": 14540, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017023101551, "job": 36, "event": "table_file_deletion", "file_number": 62}
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017023104963, "job": 36, "event": "table_file_deletion", "file_number": 60}
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:22.986476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.105135) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.105144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.105147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.105150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:30:23 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:30:23.105153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:30:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:30:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:30:24 np0005548790.localdomain ceph-mon[301742]: pgmap v750: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:26.057 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:26 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:30:26 np0005548790.localdomain ceph-mon[301742]: pgmap v751: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:26 np0005548790.localdomain podman[324432]: 2025-12-06 10:30:26.579916264 +0000 UTC m=+0.088897256 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 06 10:30:26 np0005548790.localdomain podman[324432]: 2025-12-06 10:30:26.592183663 +0000 UTC m=+0.101164675 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 06 10:30:26 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:30:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:27.374 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:27.952 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:28 np0005548790.localdomain ceph-mon[301742]: pgmap v752: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 18 KiB/s wr, 0 op/s
Dec 06 10:30:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 10:30:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:30:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:30:29 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:30:29 np0005548790.localdomain podman[324451]: 2025-12-06 10:30:29.588317304 +0000 UTC m=+0.098642776 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 06 10:30:29 np0005548790.localdomain podman[324450]: 2025-12-06 10:30:29.62619284 +0000 UTC m=+0.138761123 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:30:29 np0005548790.localdomain podman[324450]: 2025-12-06 10:30:29.666343667 +0000 UTC m=+0.178911960 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:30:29 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:30:29 np0005548790.localdomain podman[324452]: 2025-12-06 10:30:29.684420751 +0000 UTC m=+0.189518664 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Dec 06 10:30:29 np0005548790.localdomain podman[324451]: 2025-12-06 10:30:29.72349861 +0000 UTC m=+0.233824082 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 06 10:30:29 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:30:29 np0005548790.localdomain podman[324452]: 2025-12-06 10:30:29.777865427 +0000 UTC m=+0.282963340 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7)
Dec 06 10:30:29 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:30:30 np0005548790.localdomain ceph-mon[301742]: pgmap v753: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s
Dec 06 10:30:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:31.093 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:32.415 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:32 np0005548790.localdomain ceph-mon[301742]: pgmap v754: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:33.583 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:34 np0005548790.localdomain ceph-mon[301742]: pgmap v755: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:36.151 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:36 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:30:36 np0005548790.localdomain ceph-mon[301742]: pgmap v756: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:36 np0005548790.localdomain podman[324513]: 2025-12-06 10:30:36.571042922 +0000 UTC m=+0.082991957 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 06 10:30:36 np0005548790.localdomain podman[324513]: 2025-12-06 10:30:36.585301675 +0000 UTC m=+0.097250710 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:30:36 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:30:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:37.452 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:38 np0005548790.localdomain ceph-mon[301742]: pgmap v757: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:38.616 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:30:39 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:30:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3426178596' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:30:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3426178596' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:30:39 np0005548790.localdomain podman[324532]: 2025-12-06 10:30:39.565262482 +0000 UTC m=+0.082409631 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:30:39 np0005548790.localdomain podman[324532]: 2025-12-06 10:30:39.602288274 +0000 UTC m=+0.119435443 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:30:39 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:30:39 np0005548790.localdomain podman[324533]: 2025-12-06 10:30:39.626654588 +0000 UTC m=+0.139332298 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 06 10:30:39 np0005548790.localdomain podman[324533]: 2025-12-06 10:30:39.723922477 +0000 UTC m=+0.236600217 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 06 10:30:39 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:30:40 np0005548790.localdomain ceph-mon[301742]: pgmap v758: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:41.175 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:41 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:30:41.248 262327 INFO neutron.agent.linux.ip_lib [None req-e2c1a55b-54b9-4462-8890-98a4b51f1dc5 - - - - - -] Device tapca83c526-b3 cannot be used as it has no MAC address
Dec 06 10:30:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:41.270 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:41 np0005548790.localdomain kernel: device tapca83c526-b3 entered promiscuous mode
Dec 06 10:30:41 np0005548790.localdomain NetworkManager[5968]: <info>  [1765017041.2785] manager: (tapca83c526-b3): new Generic device (/org/freedesktop/NetworkManager/Devices/54)
Dec 06 10:30:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:30:41Z|00278|binding|INFO|Claiming lport ca83c526-b334-4997-b9bb-8bcd94123357 for this chassis.
Dec 06 10:30:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:30:41Z|00279|binding|INFO|ca83c526-b334-4997-b9bb-8bcd94123357: Claiming unknown
Dec 06 10:30:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:41.280 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:41 np0005548790.localdomain systemd-udevd[324590]: Network interface NamePolicy= disabled on kernel command line.
Dec 06 10:30:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:41.292 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/16', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a99226e9-a30e-48e7-aced-0a65a5ee127f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a99226e9-a30e-48e7-aced-0a65a5ee127f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d2c3fc1d605488db2b4af2af7696c67', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdf9d823-28a7-4b4c-8100-32a472f63310, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=ca83c526-b334-4997-b9bb-8bcd94123357) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:30:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:41.293 159200 INFO neutron.agent.ovn.metadata.agent [-] Port ca83c526-b334-4997-b9bb-8bcd94123357 in datapath a99226e9-a30e-48e7-aced-0a65a5ee127f bound to our chassis
Dec 06 10:30:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:41.295 159200 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a99226e9-a30e-48e7-aced-0a65a5ee127f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 06 10:30:41 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:41.296 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[20de8ef6-ab3f-480b-9e1f-96e1d4eb58f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:30:41Z|00280|binding|INFO|Setting lport ca83c526-b334-4997-b9bb-8bcd94123357 ovn-installed in OVS
Dec 06 10:30:41 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:30:41Z|00281|binding|INFO|Setting lport ca83c526-b334-4997-b9bb-8bcd94123357 up in Southbound
Dec 06 10:30:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:41.312 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain virtnodedevd[228934]: ethtool ioctl error on tapca83c526-b3: No such device
Dec 06 10:30:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:41.352 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:41.385 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:30:41
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'manila_metadata', 'backups', 'volumes', 'manila_data']
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:30:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.003030833420575098 of space, bias 4.0, pg target 2.412543402777778 quantized to 16 (current 16)
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:30:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:42.492 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:42 np0005548790.localdomain ceph-mon[301742]: pgmap v759: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:42 np0005548790.localdomain podman[324662]: 
Dec 06 10:30:42 np0005548790.localdomain podman[324662]: 2025-12-06 10:30:42.750835904 +0000 UTC m=+0.090927100 container create baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:30:42 np0005548790.localdomain systemd[1]: Started libpod-conmon-baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f.scope.
Dec 06 10:30:42 np0005548790.localdomain systemd[1]: tmp-crun.32y2v9.mount: Deactivated successfully.
Dec 06 10:30:42 np0005548790.localdomain podman[324662]: 2025-12-06 10:30:42.705025525 +0000 UTC m=+0.045116731 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 06 10:30:42 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:30:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:30:42 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2292df0543032fa442338bdd14a0fd88123cc94a4c101e30a5ba2661386d91f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 06 10:30:42 np0005548790.localdomain podman[324662]: 2025-12-06 10:30:42.851449112 +0000 UTC m=+0.191540308 container init baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:30:42 np0005548790.localdomain podman[324662]: 2025-12-06 10:30:42.860547157 +0000 UTC m=+0.200638353 container start baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 06 10:30:42 np0005548790.localdomain dnsmasq[324680]: started, version 2.85 cachesize 150
Dec 06 10:30:42 np0005548790.localdomain dnsmasq[324680]: DNS service limited to local subnets
Dec 06 10:30:42 np0005548790.localdomain dnsmasq[324680]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 06 10:30:42 np0005548790.localdomain dnsmasq[324680]: warning: no upstream servers configured
Dec 06 10:30:42 np0005548790.localdomain dnsmasq-dhcp[324680]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 06 10:30:42 np0005548790.localdomain dnsmasq[324680]: read /var/lib/neutron/dhcp/a99226e9-a30e-48e7-aced-0a65a5ee127f/addn_hosts - 0 addresses
Dec 06 10:30:42 np0005548790.localdomain dnsmasq-dhcp[324680]: read /var/lib/neutron/dhcp/a99226e9-a30e-48e7-aced-0a65a5ee127f/host
Dec 06 10:30:42 np0005548790.localdomain dnsmasq-dhcp[324680]: read /var/lib/neutron/dhcp/a99226e9-a30e-48e7-aced-0a65a5ee127f/opts
Dec 06 10:30:42 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:30:42.980 262327 INFO neutron.agent.dhcp.agent [None req-215a717a-6a45-41f6-ac0b-cfddbd9c0785 - - - - - -] DHCP configuration for ports {'75ee166a-6c46-400b-b328-310e105def0e'} is completed
Dec 06 10:30:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:44 np0005548790.localdomain ceph-mon[301742]: pgmap v760: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:45 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e282 e282: 6 total, 6 up, 6 in
Dec 06 10:30:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:46 np0005548790.localdomain ceph-mon[301742]: osdmap e282: 6 total, 6 up, 6 in
Dec 06 10:30:46 np0005548790.localdomain ceph-mon[301742]: pgmap v762: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:46 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/510296205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:46 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e283 e283: 6 total, 6 up, 6 in
Dec 06 10:30:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:46.210 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:46.951 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:47 np0005548790.localdomain ceph-mon[301742]: osdmap e283: 6 total, 6 up, 6 in
Dec 06 10:30:47 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/495746363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:47.531 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:48 np0005548790.localdomain ceph-mon[301742]: pgmap v764: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:30:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:30:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.411 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.412 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.412 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:30:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:30:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156741 "" "Go-http-client/1.1"
Dec 06 10:30:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:30:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19262 "" "Go-http-client/1.1"
Dec 06 10:30:48 np0005548790.localdomain systemd[1]: tmp-crun.krLSD5.mount: Deactivated successfully.
Dec 06 10:30:48 np0005548790.localdomain dnsmasq[324680]: exiting on receipt of SIGTERM
Dec 06 10:30:48 np0005548790.localdomain podman[324698]: 2025-12-06 10:30:48.838738652 +0000 UTC m=+0.072998109 container kill baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:30:48 np0005548790.localdomain systemd[1]: libpod-baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f.scope: Deactivated successfully.
Dec 06 10:30:48 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:30:48Z|00282|binding|INFO|Removing iface tapca83c526-b3 ovn-installed in OVS
Dec 06 10:30:48 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:30:48Z|00283|binding|INFO|Removing lport ca83c526-b334-4997-b9bb-8bcd94123357 ovn-installed in OVS
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.847 159200 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 5f9de9aa-9e2a-47f4-84af-e101b689d0e4 with type ""
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.849 159200 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005548790.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/16', 'neutron:device_id': 'dhcp6af71730-fb73-5b19-9dbd-4376e3ccff87-a99226e9-a30e-48e7-aced-0a65a5ee127f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a99226e9-a30e-48e7-aced-0a65a5ee127f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7d2c3fc1d605488db2b4af2af7696c67', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005548790.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cdf9d823-28a7-4b4c-8100-32a472f63310, chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f83544f19a0>], logical_port=ca83c526-b334-4997-b9bb-8bcd94123357) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.851 159200 INFO neutron.agent.ovn.metadata.agent [-] Port ca83c526-b334-4997-b9bb-8bcd94123357 in datapath a99226e9-a30e-48e7-aced-0a65a5ee127f unbound from our chassis
Dec 06 10:30:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:48.851 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.853 159200 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a99226e9-a30e-48e7-aced-0a65a5ee127f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 06 10:30:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:30:48.854 262518 DEBUG oslo.privsep.daemon [-] privsep: reply[817c6a1b-59b8-4b67-8f2c-4d3531e9b7ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 06 10:30:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:48.857 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:48 np0005548790.localdomain podman[324712]: 2025-12-06 10:30:48.906939211 +0000 UTC m=+0.056597139 container died baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:30:48 np0005548790.localdomain podman[324712]: 2025-12-06 10:30:48.940402188 +0000 UTC m=+0.090060076 container cleanup baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:30:48 np0005548790.localdomain systemd[1]: libpod-conmon-baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f.scope: Deactivated successfully.
Dec 06 10:30:48 np0005548790.localdomain podman[324714]: 2025-12-06 10:30:48.990482461 +0000 UTC m=+0.125241740 container remove baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a99226e9-a30e-48e7-aced-0a65a5ee127f, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.001 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:49 np0005548790.localdomain kernel: device tapca83c526-b3 left promiscuous mode
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.016 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:49 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:30:49.052 262327 INFO neutron.agent.dhcp.agent [None req-c05ceb22-2625-4f14-9fb7-b3623a74e17e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:30:49 np0005548790.localdomain neutron_dhcp_agent[262322]: 2025-12-06 10:30:49.053 262327 INFO neutron.agent.dhcp.agent [None req-c05ceb22-2625-4f14-9fb7-b3623a74e17e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 06 10:30:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.245 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:30:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:49.355 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:30:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 06 10:30:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-d2292df0543032fa442338bdd14a0fd88123cc94a4c101e30a5ba2661386d91f-merged.mount: Deactivated successfully.
Dec 06 10:30:49 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-baf3ed04b02d1182bab65a1c2b1637a55fda92da2bfa9bf686caae24237fa27f-userdata-shm.mount: Deactivated successfully.
Dec 06 10:30:49 np0005548790.localdomain systemd[1]: run-netns-qdhcp\x2da99226e9\x2da30e\x2d48e7\x2daced\x2d0a65a5ee127f.mount: Deactivated successfully.
Dec 06 10:30:50 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:50.352 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:50 np0005548790.localdomain ceph-mon[301742]: pgmap v765: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 06 10:30:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:51.242 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:51.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:51.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 06 10:30:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:52.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:52 np0005548790.localdomain ceph-mon[301742]: pgmap v766: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 06 10:30:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:52.572 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:53 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 e284: 6 total, 6 up, 6 in
Dec 06 10:30:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:30:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:30:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:30:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:53.703 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:54 np0005548790.localdomain ceph-mon[301742]: osdmap e284: 6 total, 6 up, 6 in
Dec 06 10:30:54 np0005548790.localdomain ceph-mon[301742]: pgmap v768: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Dec 06 10:30:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.356 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.357 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.357 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:30:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Dec 06 10:30:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:30:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/11701804' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:55 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:55.861 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.078 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.080 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11366MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.081 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.081 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.196 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.196 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.222 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.283 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:56 np0005548790.localdomain ceph-mon[301742]: pgmap v769: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Dec 06 10:30:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/11701804' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:30:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4191358629' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.680 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:30:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:56.686 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:30:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:57.194 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:30:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:57.197 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:30:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:57.197 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:30:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Dec 06 10:30:57 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:30:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:57.551 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4191358629' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:30:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:57.573 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:30:57 np0005548790.localdomain podman[324789]: 2025-12-06 10:30:57.585414271 +0000 UTC m=+0.089106852 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 06 10:30:57 np0005548790.localdomain podman[324789]: 2025-12-06 10:30:57.615440566 +0000 UTC m=+0.119133147 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:30:57 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:30:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:30:58.197 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:30:58 np0005548790.localdomain ceph-mon[301742]: pgmap v770: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 44 op/s
Dec 06 10:30:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:30:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:31:00 np0005548790.localdomain podman[324804]: 2025-12-06 10:31:00.577545405 +0000 UTC m=+0.078289480 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:31:00 np0005548790.localdomain podman[324804]: 2025-12-06 10:31:00.586106555 +0000 UTC m=+0.086850620 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:31:00 np0005548790.localdomain ceph-mon[301742]: pgmap v771: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3107954852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:31:00 np0005548790.localdomain podman[324805]: 2025-12-06 10:31:00.645674082 +0000 UTC m=+0.141154706 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 06 10:31:00 np0005548790.localdomain podman[324805]: 2025-12-06 10:31:00.682148971 +0000 UTC m=+0.177629575 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: tmp-crun.kXEahP.mount: Deactivated successfully.
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:31:00 np0005548790.localdomain podman[324809]: 2025-12-06 10:31:00.702130916 +0000 UTC m=+0.192637947 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git)
Dec 06 10:31:00 np0005548790.localdomain podman[324809]: 2025-12-06 10:31:00.714046787 +0000 UTC m=+0.204553838 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec 06 10:31:00 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:31:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:01.323 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3944337570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:02.601 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:02 np0005548790.localdomain ceph-mon[301742]: pgmap v772: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:04 np0005548790.localdomain ceph-mon[301742]: pgmap v773: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:06.366 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:06 np0005548790.localdomain ceph-mon[301742]: pgmap v774: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.329 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:31:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:31:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:07 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:31:07 np0005548790.localdomain podman[324865]: 2025-12-06 10:31:07.561365621 +0000 UTC m=+0.076943805 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 06 10:31:07 np0005548790.localdomain podman[324865]: 2025-12-06 10:31:07.595970049 +0000 UTC m=+0.111548223 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd)
Dec 06 10:31:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:07.604 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:07 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:31:08 np0005548790.localdomain ceph-mon[301742]: pgmap v775: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:31:10 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:31:10 np0005548790.localdomain podman[324884]: 2025-12-06 10:31:10.566905435 +0000 UTC m=+0.076180714 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 06 10:31:10 np0005548790.localdomain ceph-mon[301742]: pgmap v776: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:10 np0005548790.localdomain podman[324884]: 2025-12-06 10:31:10.609085756 +0000 UTC m=+0.118361025 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 06 10:31:10 np0005548790.localdomain systemd[1]: tmp-crun.HSTFxL.mount: Deactivated successfully.
Dec 06 10:31:10 np0005548790.localdomain podman[324883]: 2025-12-06 10:31:10.626638866 +0000 UTC m=+0.140005356 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:31:10 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:31:10 np0005548790.localdomain podman[324883]: 2025-12-06 10:31:10.662221041 +0000 UTC m=+0.175587511 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 06 10:31:10 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:31:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:11.403 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0618c331f0>)]
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0654f3a250>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f0618c33760>)]
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:31:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 06 10:31:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:12.638 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:12 np0005548790.localdomain ceph-mon[301742]: pgmap v777: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:31:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:31:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:14 np0005548790.localdomain ceph-mon[301742]: pgmap v778: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:15 np0005548790.localdomain ceph-mon[301742]: mgrmap e55: np0005548790.kvkfyr(active, since 19m), standbys: np0005548785.vhqlsq, np0005548788.yvwbqq, np0005548789.mzhmje
Dec 06 10:31:15 np0005548790.localdomain sudo[324930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:31:15 np0005548790.localdomain sudo[324930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:31:15 np0005548790.localdomain sudo[324930]: pam_unix(sudo:session): session closed for user root
Dec 06 10:31:15 np0005548790.localdomain sudo[324948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:31:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:15 np0005548790.localdomain sudo[324948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:31:16 np0005548790.localdomain sudo[324948]: pam_unix(sudo:session): session closed for user root
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: pgmap v779: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:31:16 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 1ce7c473-2c8a-488a-ad40-aa181c00d6b4 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:31:16 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 1ce7c473-2c8a-488a-ad40-aa181c00d6b4 (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:31:16 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 1ce7c473-2c8a-488a-ad40-aa181c00d6b4 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:31:16 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:31:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:16.407 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:16 np0005548790.localdomain sudo[324997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:31:16 np0005548790.localdomain sudo[324997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:31:16 np0005548790.localdomain sudo[324997]: pam_unix(sudo:session): session closed for user root
Dec 06 10:31:17 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:31:17 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:31:17 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:31:17 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:31:17 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:31:17 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:31:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:17 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:17.686 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:31:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:31:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:31:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:31:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:31:18 np0005548790.localdomain ceph-mon[301742]: pgmap v780: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:31:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18773 "" "Go-http-client/1.1"
Dec 06 10:31:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:20 np0005548790.localdomain ceph-mon[301742]: pgmap v781: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:21.449 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:22 np0005548790.localdomain ceph-mon[301742]: pgmap v782: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:22 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:22.731 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:31:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:31:24 np0005548790.localdomain ceph-mon[301742]: pgmap v783: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 06 10:31:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 06 10:31:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:26.492 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:26 np0005548790.localdomain ceph-mon[301742]: pgmap v784: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 06 10:31:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v785: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 06 10:31:27 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:27.774 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:28 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:31:28 np0005548790.localdomain systemd[1]: tmp-crun.z2HC7B.mount: Deactivated successfully.
Dec 06 10:31:28 np0005548790.localdomain ceph-mon[301742]: pgmap v785: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 06 10:31:28 np0005548790.localdomain podman[325017]: 2025-12-06 10:31:28.570327774 +0000 UTC m=+0.082817352 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 06 10:31:28 np0005548790.localdomain podman[325017]: 2025-12-06 10:31:28.60559011 +0000 UTC m=+0.118079718 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 06 10:31:28 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:31:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 06 10:31:29 np0005548790.localdomain ovn_controller[153552]: 2025-12-06T10:31:29Z|00284|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec 06 10:31:30 np0005548790.localdomain ceph-mon[301742]: pgmap v786: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 06 10:31:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:31:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:31:31 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:31:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:31.547 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:31 np0005548790.localdomain podman[325037]: 2025-12-06 10:31:31.611028621 +0000 UTC m=+0.064382608 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:31:31 np0005548790.localdomain podman[325037]: 2025-12-06 10:31:31.621106701 +0000 UTC m=+0.074460678 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 06 10:31:31 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:31:31 np0005548790.localdomain podman[325038]: 2025-12-06 10:31:31.659879101 +0000 UTC m=+0.113243118 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 06 10:31:31 np0005548790.localdomain podman[325038]: 2025-12-06 10:31:31.680244878 +0000 UTC m=+0.133608935 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, config_id=edpm, distribution-scope=public)
Dec 06 10:31:31 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:31:31 np0005548790.localdomain podman[325036]: 2025-12-06 10:31:31.727919496 +0000 UTC m=+0.179805304 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:31:31 np0005548790.localdomain podman[325036]: 2025-12-06 10:31:31.764185319 +0000 UTC m=+0.216071067 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 06 10:31:31 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:31:32 np0005548790.localdomain ceph-mon[301742]: pgmap v787: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:32 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:32.816 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:34 np0005548790.localdomain ceph-mon[301742]: pgmap v788: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:34 np0005548790.localdomain sshd[325101]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:31:34 np0005548790.localdomain sshd[325101]: Accepted publickey for zuul from 38.102.83.114 port 36972 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:31:34 np0005548790.localdomain systemd-logind[760]: New session 74 of user zuul.
Dec 06 10:31:34 np0005548790.localdomain systemd[1]: Started Session 74 of User zuul.
Dec 06 10:31:34 np0005548790.localdomain sshd[325101]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:31:34 np0005548790.localdomain sudo[325121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqctowllbyqubxwcjnnxzuitscfbemhb ; /usr/bin/python3
Dec 06 10:31:34 np0005548790.localdomain sudo[325121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:31:34 np0005548790.localdomain python3[325123]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister
                                                           _uses_shell=True zuul_log_id=fa163ef9-e89a-7de2-0762-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 06 10:31:35 np0005548790.localdomain sudo[325121]: pam_unix(sudo:session): session closed for user root
Dec 06 10:31:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:36 np0005548790.localdomain ceph-mon[301742]: pgmap v789: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:36.578 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v790: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.582509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097582543, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1135, "num_deletes": 252, "total_data_size": 1870586, "memory_usage": 2067728, "flush_reason": "Manual Compaction"}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097591268, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 1214849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37431, "largest_seqno": 38561, "table_properties": {"data_size": 1210299, "index_size": 2149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10698, "raw_average_key_size": 20, "raw_value_size": 1200818, "raw_average_value_size": 2291, "num_data_blocks": 96, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765017022, "oldest_key_time": 1765017022, "file_creation_time": 1765017097, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 8808 microseconds, and 4006 cpu microseconds.
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.591314) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 1214849 bytes OK
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.591335) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.592811) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.592833) EVENT_LOG_v1 {"time_micros": 1765017097592825, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.592854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1865029, prev total WAL file size 1865029, number of live WAL files 2.
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.593502) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end)
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(1186KB)], [63(18MB)]
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097593762, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 20844747, "oldest_snapshot_seqno": -1}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14536 keys, 19226967 bytes, temperature: kUnknown
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097696846, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 19226967, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19145830, "index_size": 43724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 392197, "raw_average_key_size": 26, "raw_value_size": 18900951, "raw_average_value_size": 1300, "num_data_blocks": 1601, "num_entries": 14536, "num_filter_entries": 14536, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765015768, "oldest_key_time": 0, "file_creation_time": 1765017097, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4dd2910d-705d-477e-9f8b-a80f7db9791a", "db_session_id": "CFD0WFBBCIFLI72L04W0", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.697187) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 19226967 bytes
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.698671) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.1 rd, 186.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 18.7 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(33.0) write-amplify(15.8) OK, records in: 15064, records dropped: 528 output_compression: NoCompression
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.698685) EVENT_LOG_v1 {"time_micros": 1765017097698679, "job": 38, "event": "compaction_finished", "compaction_time_micros": 103154, "compaction_time_cpu_micros": 49783, "output_level": 6, "num_output_files": 1, "total_output_size": 19226967, "num_input_records": 15064, "num_output_records": 14536, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097698900, "job": 38, "event": "table_file_deletion", "file_number": 65}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005548790/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765017097700954, "job": 38, "event": "table_file_deletion", "file_number": 63}
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.593391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.701042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.701050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.701052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.701056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:31:37 np0005548790.localdomain ceph-mon[301742]: rocksdb: (Original Log Time 2025/12/06-10:31:37.701059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 06 10:31:37 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:37.853 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:38 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:31:38 np0005548790.localdomain podman[325126]: 2025-12-06 10:31:38.574190913 +0000 UTC m=+0.088862053 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 06 10:31:38 np0005548790.localdomain podman[325126]: 2025-12-06 10:31:38.588181769 +0000 UTC m=+0.102852929 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:31:38 np0005548790.localdomain ceph-mon[301742]: pgmap v790: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:38 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:31:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:39 np0005548790.localdomain sshd[325101]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:31:39 np0005548790.localdomain systemd[1]: session-74.scope: Deactivated successfully.
Dec 06 10:31:39 np0005548790.localdomain systemd-logind[760]: Session 74 logged out. Waiting for processes to exit.
Dec 06 10:31:39 np0005548790.localdomain systemd-logind[760]: Removed session 74.
Dec 06 10:31:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1455173782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:31:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/1455173782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:31:40 np0005548790.localdomain ceph-mon[301742]: pgmap v791: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:31:41 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:41 np0005548790.localdomain systemd[1]: tmp-crun.TUP9PS.mount: Deactivated successfully.
Dec 06 10:31:41 np0005548790.localdomain podman[325145]: 2025-12-06 10:31:41.571788604 +0000 UTC m=+0.081615200 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:31:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:41.627 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:41 np0005548790.localdomain systemd[1]: tmp-crun.Ru9Ow3.mount: Deactivated successfully.
Dec 06 10:31:41 np0005548790.localdomain podman[325146]: 2025-12-06 10:31:41.655208251 +0000 UTC m=+0.165177861 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:31:41 np0005548790.localdomain podman[325146]: 2025-12-06 10:31:41.686040378 +0000 UTC m=+0.196009998 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:31:41 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:31:41 np0005548790.localdomain podman[325145]: 2025-12-06 10:31:41.707607017 +0000 UTC m=+0.217433703 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:31:41 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:31:41
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'manila_data', 'manila_metadata', 'images', 'volumes']
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:31:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.003030833420575098 of space, bias 4.0, pg target 2.412543402777778 quantized to 16 (current 16)
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:31:42 np0005548790.localdomain ceph-mon[301742]: pgmap v792: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:31:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:31:42 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:42.894 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:44 np0005548790.localdomain ceph-mgr[286934]: [devicehealth INFO root] Check health
Dec 06 10:31:44 np0005548790.localdomain ceph-mon[301742]: pgmap v793: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:46.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:46 np0005548790.localdomain ceph-mon[301742]: pgmap v794: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:46 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1627710630' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:46 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3484471563' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:46 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:46.686 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:47.922 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:31:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:31:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:31:48.412 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:31:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:31:48.413 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:31:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:31:48.413 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:31:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:31:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:31:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:31:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18780 "" "Go-http-client/1.1"
Dec 06 10:31:48 np0005548790.localdomain ceph-mon[301742]: pgmap v795: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:49.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:49.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:31:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:49.334 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:31:49 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:49.346 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:31:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:50 np0005548790.localdomain ceph-mon[301742]: pgmap v796: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:51.342 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:51.730 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:52.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:52 np0005548790.localdomain ceph-mon[301742]: pgmap v797: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:52.924 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:53 np0005548790.localdomain sshd[325189]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:31:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:53.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:31:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:31:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:31:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:31:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:31:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:31:54 np0005548790.localdomain ceph-mon[301742]: pgmap v798: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:55 np0005548790.localdomain sshd[325189]: Received disconnect from 101.47.160.186 port 34366:11: Bye Bye [preauth]
Dec 06 10:31:55 np0005548790.localdomain sshd[325189]: Disconnected from authenticating user root 101.47.160.186 port 34366 [preauth]
Dec 06 10:31:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v799: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.360 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.360 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.361 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.361 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.361 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:31:56 np0005548790.localdomain ceph-mon[301742]: pgmap v799: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.774 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:31:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3668081092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.794 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.977 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.978 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11378MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.978 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:31:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:56.979 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.055 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.056 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.072 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:31:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:31:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1158284102' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.496 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.503 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.531 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.533 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.534 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:31:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3668081092' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1158284102' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:31:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:57.968 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:31:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:58.534 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:58.535 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:31:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:31:58.535 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:31:58 np0005548790.localdomain ceph-mon[301742]: pgmap v800: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:31:59 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:31:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v801: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:31:59 np0005548790.localdomain podman[325235]: 2025-12-06 10:31:59.56834866 +0000 UTC m=+0.083255144 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:31:59 np0005548790.localdomain podman[325235]: 2025-12-06 10:31:59.578166213 +0000 UTC m=+0.093072737 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 06 10:31:59 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:31:59 np0005548790.localdomain sshd[325254]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:31:59 np0005548790.localdomain sshd[325254]: Accepted publickey for zuul from 38.102.83.114 port 32956 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:00 np0005548790.localdomain systemd-logind[760]: New session 75 of user zuul.
Dec 06 10:32:00 np0005548790.localdomain systemd[1]: Started Session 75 of User zuul.
Dec 06 10:32:00 np0005548790.localdomain sshd[325254]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:00 np0005548790.localdomain sudo[325258]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /var/log
Dec 06 10:32:00 np0005548790.localdomain sudo[325258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:00 np0005548790.localdomain ceph-mon[301742]: pgmap v801: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:00 np0005548790.localdomain sudo[325258]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:00 np0005548790.localdomain sshd[325257]: Received disconnect from 38.102.83.114 port 32956:11: disconnected by user
Dec 06 10:32:00 np0005548790.localdomain sshd[325257]: Disconnected from user zuul 38.102.83.114 port 32956
Dec 06 10:32:00 np0005548790.localdomain sshd[325254]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:00 np0005548790.localdomain systemd[1]: session-75.scope: Deactivated successfully.
Dec 06 10:32:00 np0005548790.localdomain systemd-logind[760]: Session 75 logged out. Waiting for processes to exit.
Dec 06 10:32:00 np0005548790.localdomain systemd-logind[760]: Removed session 75.
Dec 06 10:32:01 np0005548790.localdomain sshd[325276]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:01 np0005548790.localdomain sshd[325276]: Accepted publickey for zuul from 38.102.83.114 port 32968 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:01 np0005548790.localdomain systemd-logind[760]: New session 76 of user zuul.
Dec 06 10:32:01 np0005548790.localdomain systemd[1]: Started Session 76 of User zuul.
Dec 06 10:32:01 np0005548790.localdomain sshd[325276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:01 np0005548790.localdomain sudo[325280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/networks
Dec 06 10:32:01 np0005548790.localdomain sudo[325280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:01 np0005548790.localdomain sudo[325280]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:01 np0005548790.localdomain sshd[325279]: Received disconnect from 38.102.83.114 port 32968:11: disconnected by user
Dec 06 10:32:01 np0005548790.localdomain sshd[325279]: Disconnected from user zuul 38.102.83.114 port 32968
Dec 06 10:32:01 np0005548790.localdomain sshd[325276]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:01 np0005548790.localdomain systemd[1]: session-76.scope: Deactivated successfully.
Dec 06 10:32:01 np0005548790.localdomain systemd-logind[760]: Session 76 logged out. Waiting for processes to exit.
Dec 06 10:32:01 np0005548790.localdomain systemd-logind[760]: Removed session 76.
Dec 06 10:32:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:01 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:01.807 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:01 np0005548790.localdomain sshd[325298]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:01 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/70586077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:32:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:32:01 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:32:01 np0005548790.localdomain sshd[325298]: Accepted publickey for zuul from 38.102.83.114 port 32980 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:01 np0005548790.localdomain systemd-logind[760]: New session 77 of user zuul.
Dec 06 10:32:01 np0005548790.localdomain systemd[1]: Started Session 77 of User zuul.
Dec 06 10:32:01 np0005548790.localdomain sshd[325298]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:02 np0005548790.localdomain podman[325302]: 2025-12-06 10:32:02.015217959 +0000 UTC m=+0.090398536 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Dec 06 10:32:02 np0005548790.localdomain podman[325302]: 2025-12-06 10:32:02.027457937 +0000 UTC m=+0.102638514 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 06 10:32:02 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:32:02 np0005548790.localdomain sudo[325342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/containers.conf
Dec 06 10:32:02 np0005548790.localdomain sudo[325342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:02 np0005548790.localdomain sudo[325342]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:02 np0005548790.localdomain sshd[325334]: Received disconnect from 38.102.83.114 port 32980:11: disconnected by user
Dec 06 10:32:02 np0005548790.localdomain sshd[325334]: Disconnected from user zuul 38.102.83.114 port 32980
Dec 06 10:32:02 np0005548790.localdomain sshd[325298]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:02 np0005548790.localdomain systemd[1]: session-77.scope: Deactivated successfully.
Dec 06 10:32:02 np0005548790.localdomain systemd-logind[760]: Session 77 logged out. Waiting for processes to exit.
Dec 06 10:32:02 np0005548790.localdomain systemd-logind[760]: Removed session 77.
Dec 06 10:32:02 np0005548790.localdomain podman[325300]: 2025-12-06 10:32:02.119478915 +0000 UTC m=+0.195341010 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 06 10:32:02 np0005548790.localdomain podman[325300]: 2025-12-06 10:32:02.132214287 +0000 UTC m=+0.208076402 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:32:02 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:32:02 np0005548790.localdomain podman[325303]: 2025-12-06 10:32:02.082995366 +0000 UTC m=+0.152883161 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec 06 10:32:02 np0005548790.localdomain podman[325303]: 2025-12-06 10:32:02.217715419 +0000 UTC m=+0.287603204 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350)
Dec 06 10:32:02 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:32:02 np0005548790.localdomain sshd[325379]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:02 np0005548790.localdomain sshd[325379]: Accepted publickey for zuul from 38.102.83.114 port 32996 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:02 np0005548790.localdomain systemd-logind[760]: New session 78 of user zuul.
Dec 06 10:32:02 np0005548790.localdomain systemd[1]: Started Session 78 of User zuul.
Dec 06 10:32:02 np0005548790.localdomain sshd[325379]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:02 np0005548790.localdomain sudo[325383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ceph
Dec 06 10:32:02 np0005548790.localdomain sudo[325383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:02 np0005548790.localdomain sudo[325383]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:02 np0005548790.localdomain sshd[325382]: Received disconnect from 38.102.83.114 port 32996:11: disconnected by user
Dec 06 10:32:02 np0005548790.localdomain sshd[325382]: Disconnected from user zuul 38.102.83.114 port 32996
Dec 06 10:32:02 np0005548790.localdomain sshd[325379]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:02 np0005548790.localdomain systemd[1]: session-78.scope: Deactivated successfully.
Dec 06 10:32:02 np0005548790.localdomain systemd-logind[760]: Session 78 logged out. Waiting for processes to exit.
Dec 06 10:32:02 np0005548790.localdomain systemd-logind[760]: Removed session 78.
Dec 06 10:32:02 np0005548790.localdomain ceph-mon[301742]: pgmap v802: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1583459833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:03.002 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:03 np0005548790.localdomain sshd[325401]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:03 np0005548790.localdomain sshd[325401]: Accepted publickey for zuul from 38.102.83.114 port 33004 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:03 np0005548790.localdomain systemd-logind[760]: New session 79 of user zuul.
Dec 06 10:32:03 np0005548790.localdomain systemd[1]: Started Session 79 of User zuul.
Dec 06 10:32:03 np0005548790.localdomain sshd[325401]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:03 np0005548790.localdomain sudo[325405]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ci
Dec 06 10:32:03 np0005548790.localdomain sudo[325405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:03 np0005548790.localdomain sudo[325405]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:03 np0005548790.localdomain sshd[325404]: Received disconnect from 38.102.83.114 port 33004:11: disconnected by user
Dec 06 10:32:03 np0005548790.localdomain sshd[325404]: Disconnected from user zuul 38.102.83.114 port 33004
Dec 06 10:32:03 np0005548790.localdomain sshd[325401]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:03 np0005548790.localdomain systemd[1]: session-79.scope: Deactivated successfully.
Dec 06 10:32:03 np0005548790.localdomain systemd-logind[760]: Session 79 logged out. Waiting for processes to exit.
Dec 06 10:32:03 np0005548790.localdomain systemd-logind[760]: Removed session 79.
Dec 06 10:32:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:03 np0005548790.localdomain sshd[325423]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:03 np0005548790.localdomain sshd[325423]: Accepted publickey for zuul from 38.102.83.114 port 33006 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:03 np0005548790.localdomain systemd-logind[760]: New session 80 of user zuul.
Dec 06 10:32:03 np0005548790.localdomain systemd[1]: Started Session 80 of User zuul.
Dec 06 10:32:03 np0005548790.localdomain sshd[325423]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:03 np0005548790.localdomain sudo[325427]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.conf
Dec 06 10:32:03 np0005548790.localdomain sudo[325427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:03 np0005548790.localdomain sudo[325427]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:03 np0005548790.localdomain sshd[325426]: Received disconnect from 38.102.83.114 port 33006:11: disconnected by user
Dec 06 10:32:03 np0005548790.localdomain sshd[325426]: Disconnected from user zuul 38.102.83.114 port 33006
Dec 06 10:32:03 np0005548790.localdomain sshd[325423]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:03 np0005548790.localdomain systemd[1]: session-80.scope: Deactivated successfully.
Dec 06 10:32:03 np0005548790.localdomain systemd-logind[760]: Session 80 logged out. Waiting for processes to exit.
Dec 06 10:32:03 np0005548790.localdomain systemd-logind[760]: Removed session 80.
Dec 06 10:32:04 np0005548790.localdomain ceph-mon[301742]: pgmap v803: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:04 np0005548790.localdomain sshd[325445]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:04 np0005548790.localdomain sshd[325445]: Accepted publickey for zuul from 38.102.83.114 port 33008 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:04 np0005548790.localdomain systemd-logind[760]: New session 81 of user zuul.
Dec 06 10:32:04 np0005548790.localdomain systemd[1]: Started Session 81 of User zuul.
Dec 06 10:32:04 np0005548790.localdomain sshd[325445]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:04 np0005548790.localdomain sudo[325449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.repos.d
Dec 06 10:32:04 np0005548790.localdomain sudo[325449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:04 np0005548790.localdomain sudo[325449]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:04 np0005548790.localdomain sshd[325448]: Received disconnect from 38.102.83.114 port 33008:11: disconnected by user
Dec 06 10:32:04 np0005548790.localdomain sshd[325448]: Disconnected from user zuul 38.102.83.114 port 33008
Dec 06 10:32:04 np0005548790.localdomain sshd[325445]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:04 np0005548790.localdomain systemd[1]: session-81.scope: Deactivated successfully.
Dec 06 10:32:04 np0005548790.localdomain systemd-logind[760]: Session 81 logged out. Waiting for processes to exit.
Dec 06 10:32:04 np0005548790.localdomain systemd-logind[760]: Removed session 81.
Dec 06 10:32:04 np0005548790.localdomain sshd[325467]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:04 np0005548790.localdomain sshd[325467]: Accepted publickey for zuul from 38.102.83.114 port 33018 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:04 np0005548790.localdomain systemd-logind[760]: New session 82 of user zuul.
Dec 06 10:32:04 np0005548790.localdomain systemd[1]: Started Session 82 of User zuul.
Dec 06 10:32:04 np0005548790.localdomain sshd[325467]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:05 np0005548790.localdomain sudo[325471]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/os-net-config
Dec 06 10:32:05 np0005548790.localdomain sudo[325471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:05 np0005548790.localdomain sudo[325471]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:05 np0005548790.localdomain sshd[325470]: Received disconnect from 38.102.83.114 port 33018:11: disconnected by user
Dec 06 10:32:05 np0005548790.localdomain sshd[325470]: Disconnected from user zuul 38.102.83.114 port 33018
Dec 06 10:32:05 np0005548790.localdomain sshd[325467]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:05 np0005548790.localdomain systemd[1]: session-82.scope: Deactivated successfully.
Dec 06 10:32:05 np0005548790.localdomain systemd-logind[760]: Session 82 logged out. Waiting for processes to exit.
Dec 06 10:32:05 np0005548790.localdomain systemd-logind[760]: Removed session 82.
Dec 06 10:32:05 np0005548790.localdomain sshd[325489]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:05 np0005548790.localdomain sshd[325489]: Accepted publickey for zuul from 38.102.83.114 port 33030 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:05 np0005548790.localdomain systemd-logind[760]: New session 83 of user zuul.
Dec 06 10:32:05 np0005548790.localdomain systemd[1]: Started Session 83 of User zuul.
Dec 06 10:32:05 np0005548790.localdomain sshd[325489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v804: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:05 np0005548790.localdomain sudo[325493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /home/zuul/ansible_hostname
Dec 06 10:32:05 np0005548790.localdomain sudo[325493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:05 np0005548790.localdomain sudo[325493]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:05 np0005548790.localdomain sshd[325492]: Received disconnect from 38.102.83.114 port 33030:11: disconnected by user
Dec 06 10:32:05 np0005548790.localdomain sshd[325492]: Disconnected from user zuul 38.102.83.114 port 33030
Dec 06 10:32:05 np0005548790.localdomain sshd[325489]: pam_unix(sshd:session): session closed for user zuul
Dec 06 10:32:05 np0005548790.localdomain systemd[1]: session-83.scope: Deactivated successfully.
Dec 06 10:32:05 np0005548790.localdomain systemd-logind[760]: Session 83 logged out. Waiting for processes to exit.
Dec 06 10:32:05 np0005548790.localdomain systemd-logind[760]: Removed session 83.
Dec 06 10:32:06 np0005548790.localdomain ceph-mon[301742]: pgmap v804: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:06 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:06.816 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:08.050 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:08 np0005548790.localdomain ceph-mon[301742]: pgmap v805: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:09 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:32:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:09 np0005548790.localdomain podman[325511]: 2025-12-06 10:32:09.58532714 +0000 UTC m=+0.086815099 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 06 10:32:09 np0005548790.localdomain podman[325511]: 2025-12-06 10:32:09.620645588 +0000 UTC m=+0.122133517 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=multipathd)
Dec 06 10:32:09 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:32:10 np0005548790.localdomain ceph-mon[301742]: pgmap v806: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:11 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:11.844 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:32:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:32:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:32:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:32:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:32:12 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:32:12 np0005548790.localdomain podman[325532]: 2025-12-06 10:32:12.567113507 +0000 UTC m=+0.078280311 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 06 10:32:12 np0005548790.localdomain podman[325532]: 2025-12-06 10:32:12.608173138 +0000 UTC m=+0.119339982 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:32:12 np0005548790.localdomain systemd[1]: tmp-crun.UEHndn.mount: Deactivated successfully.
Dec 06 10:32:12 np0005548790.localdomain ceph-mon[301742]: pgmap v807: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:12 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:32:12 np0005548790.localdomain podman[325531]: 2025-12-06 10:32:12.627744443 +0000 UTC m=+0.142961116 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 06 10:32:12 np0005548790.localdomain podman[325531]: 2025-12-06 10:32:12.642216762 +0000 UTC m=+0.157433435 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 06 10:32:12 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:32:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:32:12 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:32:13 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:13.084 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:13 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v808: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:14 np0005548790.localdomain ceph-mon[301742]: pgmap v808: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:14 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:15 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:16 np0005548790.localdomain ceph-mon[301742]: pgmap v809: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:16 np0005548790.localdomain sudo[325577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:32:16 np0005548790.localdomain sudo[325577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:16 np0005548790.localdomain sudo[325577]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:16 np0005548790.localdomain sudo[325595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 06 10:32:16 np0005548790.localdomain sudo[325595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:16 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:16.881 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:17 np0005548790.localdomain systemd[1]: tmp-crun.WWQ89I.mount: Deactivated successfully.
Dec 06 10:32:17 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:17 np0005548790.localdomain podman[325684]: 2025-12-06 10:32:17.511666558 +0000 UTC m=+0.106697123 container exec 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, name=rhceph, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph)
Dec 06 10:32:17 np0005548790.localdomain podman[325684]: 2025-12-06 10:32:17.652292749 +0000 UTC m=+0.247323304 container exec_died 585fec6e84bebfad788f88a950aa936e86dd08579d1a8c6fe82bc7621927d9e3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-crash-np0005548790, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, GIT_BRANCH=main, release=1763362218, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 06 10:32:18 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:18.123 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:32:18 np0005548790.localdomain sudo[325595]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:32:18 np0005548790.localdomain sudo[325803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:32:18 np0005548790.localdomain sudo[325803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:18 np0005548790.localdomain sudo[325803]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:18 np0005548790.localdomain podman[239825]: time="2025-12-06T10:32:18Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:32:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:32:18 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:32:18 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:32:18 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18787 "" "Go-http-client/1.1"
Dec 06 10:32:18 np0005548790.localdomain sudo[325821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 06 10:32:18 np0005548790.localdomain sudo[325821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: pgmap v810: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:18 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:19 np0005548790.localdomain sudo[325821]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:19 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:19 np0005548790.localdomain sudo[325871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 06 10:32:19 np0005548790.localdomain sudo[325871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:19 np0005548790.localdomain sudo[325871]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:19 np0005548790.localdomain sudo[325889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/1939e851-b10c-5c3b-9bb7-8e7f380233e8/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid 1939e851-b10c-5c3b-9bb7-8e7f380233e8 -- inventory --format=json-pretty --filter-for-batch
Dec 06 10:32:19 np0005548790.localdomain sudo[325889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:19 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 2025-12-06 10:32:20.050296078 +0000 UTC m=+0.087897359 container create babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_pasteur, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public)
Dec 06 10:32:20 np0005548790.localdomain systemd[1]: Started libpod-conmon-babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5.scope.
Dec 06 10:32:20 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 2025-12-06 10:32:20.013606164 +0000 UTC m=+0.051207475 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 2025-12-06 10:32:20.129169993 +0000 UTC m=+0.166771264 container init babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_pasteur, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-26T19:44:28Z)
Dec 06 10:32:20 np0005548790.localdomain systemd[1]: libpod-babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5.scope: Deactivated successfully.
Dec 06 10:32:20 np0005548790.localdomain vigorous_pasteur[325963]: 167 167
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 2025-12-06 10:32:20.149338714 +0000 UTC m=+0.186939985 container start babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_pasteur, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 2025-12-06 10:32:20.149605101 +0000 UTC m=+0.187206412 container attach babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_pasteur, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1763362218)
Dec 06 10:32:20 np0005548790.localdomain podman[325948]: 2025-12-06 10:32:20.154876602 +0000 UTC m=+0.192477893 container died babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_pasteur, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, architecture=x86_64, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 06 10:32:20 np0005548790.localdomain podman[325968]: 2025-12-06 10:32:20.25023859 +0000 UTC m=+0.089980004 container remove babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_pasteur, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, version=7, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True)
Dec 06 10:32:20 np0005548790.localdomain systemd[1]: libpod-conmon-babbf699f2d14360ba0335337715e57371d336c5c8667f489ffc0eef8fa950a5.scope: Deactivated successfully.
Dec 06 10:32:20 np0005548790.localdomain podman[325990]: 
Dec 06 10:32:20 np0005548790.localdomain podman[325990]: 2025-12-06 10:32:20.463427268 +0000 UTC m=+0.068534209 container create 77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_bose, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, version=7, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 06 10:32:20 np0005548790.localdomain systemd[1]: Started libpod-conmon-77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd.scope.
Dec 06 10:32:20 np0005548790.localdomain systemd[1]: Started libcrun container.
Dec 06 10:32:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6385149b30444c094d30e250f423ea5d3cdfb4aba0e06eed66a9ef59a97e3b0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 06 10:32:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6385149b30444c094d30e250f423ea5d3cdfb4aba0e06eed66a9ef59a97e3b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 06 10:32:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6385149b30444c094d30e250f423ea5d3cdfb4aba0e06eed66a9ef59a97e3b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 06 10:32:20 np0005548790.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6385149b30444c094d30e250f423ea5d3cdfb4aba0e06eed66a9ef59a97e3b0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 06 10:32:20 np0005548790.localdomain podman[325990]: 2025-12-06 10:32:20.431968034 +0000 UTC m=+0.037074995 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 06 10:32:20 np0005548790.localdomain podman[325990]: 2025-12-06 10:32:20.534927456 +0000 UTC m=+0.140034397 container init 77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_bose, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 06 10:32:20 np0005548790.localdomain podman[325990]: 2025-12-06 10:32:20.545407457 +0000 UTC m=+0.150514398 container start 77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_bose, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4)
Dec 06 10:32:20 np0005548790.localdomain podman[325990]: 2025-12-06 10:32:20.545704655 +0000 UTC m=+0.150811606 container attach 77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_bose, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, RELEASE=main)
Dec 06 10:32:20 np0005548790.localdomain ceph-mon[301742]: pgmap v811: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-c43659ce412d97194ec4abfcd9cc1df7a0ea9372f7f34902068e31ac837a959a-merged.mount: Deactivated successfully.
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]: [
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:     {
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "available": false,
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "ceph_device": false,
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "lsm_data": {},
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "lvs": [],
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "path": "/dev/sr0",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "rejected_reasons": [
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "Has a FileSystem",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "Insufficient space (<5GB)"
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         ],
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         "sys_api": {
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "actuators": null,
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "device_nodes": "sr0",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "human_readable_size": "482.00 KB",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "id_bus": "ata",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "model": "QEMU DVD-ROM",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "nr_requests": "2",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "partitions": {},
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "path": "/dev/sr0",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "removable": "1",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "rev": "2.5+",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "ro": "0",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "rotational": "1",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "sas_address": "",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "sas_device_handle": "",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "scheduler_mode": "mq-deadline",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "sectors": 0,
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "sectorsize": "2048",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "size": 493568.0,
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "support_discard": "0",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "type": "disk",
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:             "vendor": "QEMU"
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:         }
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]:     }
Dec 06 10:32:21 np0005548790.localdomain determined_bose[326005]: ]
Dec 06 10:32:21 np0005548790.localdomain systemd[1]: libpod-77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd.scope: Deactivated successfully.
Dec 06 10:32:21 np0005548790.localdomain podman[325990]: 2025-12-06 10:32:21.470357206 +0000 UTC m=+1.075464167 container died 77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_bose, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:21 np0005548790.localdomain systemd[1]: var-lib-containers-storage-overlay-6385149b30444c094d30e250f423ea5d3cdfb4aba0e06eed66a9ef59a97e3b0a-merged.mount: Deactivated successfully.
Dec 06 10:32:21 np0005548790.localdomain podman[327759]: 2025-12-06 10:32:21.546397845 +0000 UTC m=+0.069131845 container remove 77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_bose, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, name=rhceph, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Dec 06 10:32:21 np0005548790.localdomain systemd[1]: libpod-conmon-77889f020f946376de73a8e2826d536817e5b2da9dcc984a0b58b30e72111dfd.scope: Deactivated successfully.
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain.devices.0}] v 0)
Dec 06 10:32:21 np0005548790.localdomain sudo[325889]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain.devices.0}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548789.localdomain}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548790.localdomain}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain.devices.0}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005548788.localdomain}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [cephadm INFO root] Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [INF] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] update: starting ev 93efb4e6-596d-43e8-a976-290bb913dfde (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] complete: finished ev 93efb4e6-596d-43e8-a976-290bb913dfde (Updating node-proxy deployment (+3 -> 3))
Dec 06 10:32:21 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Completed event 93efb4e6-596d-43e8-a976-290bb913dfde (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 06 10:32:21 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:32:21 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:21.881 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:22 np0005548790.localdomain sudo[327774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 06 10:32:22 np0005548790.localdomain sudo[327774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 06 10:32:22 np0005548790.localdomain sudo[327774]: pam_unix(sudo:session): session closed for user root
Dec 06 10:32:22 np0005548790.localdomain ceph-mgr[286934]: [progress INFO root] Writing back 50 completed events
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: pgmap v812: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548789.localdomain to 836.6M
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548789.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548790.localdomain to 836.6M
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548790.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: Adjusting osd_memory_target on np0005548788.localdomain to 836.6M
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: Unable to set osd_memory_target on np0005548788.localdomain to 877243801: error parsing value: Value '877243801' is below minimum 939524096
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 172.18.0.108:0/2122066654' entity='mgr.np0005548790.kvkfyr' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 06 10:32:22 np0005548790.localdomain ceph-mon[301742]: from='mgr.27020 ' entity='mgr.np0005548790.kvkfyr' 
Dec 06 10:32:23 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:23.165 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:23 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:23 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:23 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:23 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:23 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:32:23 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:32:24 np0005548790.localdomain ceph-mon[301742]: pgmap v813: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:24 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:25 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:26 np0005548790.localdomain ceph-mon[301742]: pgmap v814: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:26 np0005548790.localdomain sshd[327792]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:26 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:26.912 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:27 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v815: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:27 np0005548790.localdomain sshd[327792]: Invalid user admin from 45.135.232.92 port 29454
Dec 06 10:32:28 np0005548790.localdomain sshd[327792]: Connection reset by invalid user admin 45.135.232.92 port 29454 [preauth]
Dec 06 10:32:28 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:28.212 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:28 np0005548790.localdomain sshd[327794]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:28 np0005548790.localdomain ceph-mon[301742]: pgmap v815: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:29 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:29 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:30 np0005548790.localdomain sshd[327794]: Invalid user kodi from 45.135.232.92 port 29470
Dec 06 10:32:30 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:32:30 np0005548790.localdomain podman[327796]: 2025-12-06 10:32:30.364139092 +0000 UTC m=+0.086728417 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec 06 10:32:30 np0005548790.localdomain podman[327796]: 2025-12-06 10:32:30.395217596 +0000 UTC m=+0.117807141 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec 06 10:32:30 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:32:30 np0005548790.localdomain ceph-mon[301742]: pgmap v816: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:30 np0005548790.localdomain sshd[327794]: Connection reset by invalid user kodi 45.135.232.92 port 29470 [preauth]
Dec 06 10:32:30 np0005548790.localdomain sshd[327816]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:31 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:31 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:31.951 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:32:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:32:32 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:32:32 np0005548790.localdomain podman[327818]: 2025-12-06 10:32:32.584517325 +0000 UTC m=+0.093758475 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 06 10:32:32 np0005548790.localdomain podman[327818]: 2025-12-06 10:32:32.59363558 +0000 UTC m=+0.102876750 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:32:32 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:32:32 np0005548790.localdomain ceph-mon[301742]: pgmap v817: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:32 np0005548790.localdomain podman[327820]: 2025-12-06 10:32:32.686908802 +0000 UTC m=+0.190447409 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, release=1755695350, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 06 10:32:32 np0005548790.localdomain podman[327820]: 2025-12-06 10:32:32.703182418 +0000 UTC m=+0.206720995 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm)
Dec 06 10:32:32 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:32:32 np0005548790.localdomain podman[327819]: 2025-12-06 10:32:32.794127197 +0000 UTC m=+0.302636378 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 06 10:32:32 np0005548790.localdomain podman[327819]: 2025-12-06 10:32:32.83485337 +0000 UTC m=+0.343362511 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:32:32 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:32:32 np0005548790.localdomain sshd[327816]: Connection reset by authenticating user root 45.135.232.92 port 29500 [preauth]
Dec 06 10:32:33 np0005548790.localdomain sshd[327883]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:33 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:33.259 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:33 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:34 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:34 np0005548790.localdomain ceph-mon[301742]: pgmap v818: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:34 np0005548790.localdomain sshd[327883]: Connection reset by authenticating user root 45.135.232.92 port 29514 [preauth]
Dec 06 10:32:34 np0005548790.localdomain sshd[327885]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:35 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:36 np0005548790.localdomain ceph-mon[301742]: pgmap v819: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:36 np0005548790.localdomain sshd[327885]: Connection reset by authenticating user root 45.135.232.92 port 64934 [preauth]
Dec 06 10:32:36 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:36.952 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:37 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:38 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:38.291 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:38 np0005548790.localdomain ceph-mon[301742]: pgmap v820: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3405830284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3405830284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:39 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v821: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3405830284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 06 10:32:39 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.32:0/3405830284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 06 10:32:40 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:32:40 np0005548790.localdomain podman[327887]: 2025-12-06 10:32:40.569997029 +0000 UTC m=+0.084246900 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:32:40 np0005548790.localdomain podman[327887]: 2025-12-06 10:32:40.586273766 +0000 UTC m=+0.100523637 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Dec 06 10:32:40 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:32:40 np0005548790.localdomain ceph-mon[301742]: pgmap v821: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Optimize plan auto_2025-12-06_10:32:41
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] do_upmap
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'manila_metadata', 'manila_data', 'backups', '.mgr']
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [balancer INFO root] prepared 0/10 changes
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:32:41 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:32:41 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:41.986 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] _maybe_adjust
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033250017448352874 of space, bias 1.0, pg target 0.6650003489670575 quantized to 32 (current 32)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.003030833420575098 of space, bias 4.0, pg target 2.412543402777778 quantized to 16 (current 16)
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 06 10:32:42 np0005548790.localdomain ceph-mon[301742]: pgmap v822: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:32:42 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:32:43 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:43.294 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.
Dec 06 10:32:43 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.
Dec 06 10:32:43 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:43 np0005548790.localdomain systemd[1]: tmp-crun.6yPuf8.mount: Deactivated successfully.
Dec 06 10:32:43 np0005548790.localdomain podman[327906]: 2025-12-06 10:32:43.56986193 +0000 UTC m=+0.083950503 container health_status 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 06 10:32:43 np0005548790.localdomain podman[327906]: 2025-12-06 10:32:43.5832673 +0000 UTC m=+0.097355873 container exec_died 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 06 10:32:43 np0005548790.localdomain systemd[1]: 028351a501d210d53e57bf100d7bd3f16d862ebdc5c0b7200016897ae117e537.service: Deactivated successfully.
Dec 06 10:32:43 np0005548790.localdomain podman[327907]: 2025-12-06 10:32:43.668195557 +0000 UTC m=+0.179583797 container health_status f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 06 10:32:43 np0005548790.localdomain podman[327907]: 2025-12-06 10:32:43.703205266 +0000 UTC m=+0.214593536 container exec_died f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 06 10:32:43 np0005548790.localdomain systemd[1]: f25d3ffff98de47a69d62499f96ea994b4e109d6116b0c970c46bf6739ab7b89.service: Deactivated successfully.
Dec 06 10:32:44 np0005548790.localdomain ceph-mon[301742]: pgmap v823: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:44 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:44 np0005548790.localdomain sshd[327950]: main: sshd: ssh-rsa algorithm is disabled
Dec 06 10:32:45 np0005548790.localdomain sshd[327950]: Accepted publickey for zuul from 192.168.122.10 port 44806 ssh2: RSA SHA256:4DkDDFgR7XMiN5v7tZau5snVtHmG3ulto18u1RwA0us
Dec 06 10:32:45 np0005548790.localdomain systemd-logind[760]: New session 84 of user zuul.
Dec 06 10:32:45 np0005548790.localdomain systemd[1]: Started Session 84 of User zuul.
Dec 06 10:32:45 np0005548790.localdomain sshd[327950]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 06 10:32:45 np0005548790.localdomain sudo[327954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt
Dec 06 10:32:45 np0005548790.localdomain sudo[327954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 06 10:32:45 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:46 np0005548790.localdomain ceph-mon[301742]: pgmap v824: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:46 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1680123915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:46 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2760621750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:47.021 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:47 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:47.334 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49656 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:47 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59071 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69257 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:48 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:48.320 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49665 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:48 np0005548790.localdomain podman[239825]: time="2025-12-06T10:32:48Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 06 10:32:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:32:48.413 159200 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:32:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:32:48.413 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:32:48 np0005548790.localdomain ovn_metadata_agent[159195]: 2025-12-06 10:32:48.413 159200 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:32:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:32:48 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154918 "" "Go-http-client/1.1"
Dec 06 10:32:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59077 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:48 np0005548790.localdomain podman[239825]: @ - - [06/Dec/2025:10:32:48 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18791 "" "Go-http-client/1.1"
Dec 06 10:32:48 np0005548790.localdomain ceph-mon[301742]: pgmap v825: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:48 np0005548790.localdomain ceph-mon[301742]: from='client.49656 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:48 np0005548790.localdomain ceph-mon[301742]: from='client.59071 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:48 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69266 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "status"} v 0)
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3542862083' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:49 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.69257 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.49665 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.59077 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.69266 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1062379559' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4047748327' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 06 10:32:49 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3542862083' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 06 10:32:50 np0005548790.localdomain ceph-mon[301742]: pgmap v826: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:51.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:51.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 06 10:32:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:51.333 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 06 10:32:51 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:51 np0005548790.localdomain ovs-vsctl[328201]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 06 10:32:51 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:51.735 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 06 10:32:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:52.024 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:52.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:52 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:52.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:52 np0005548790.localdomain virtqemud[228868]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 06 10:32:52 np0005548790.localdomain virtqemud[228868]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 06 10:32:52 np0005548790.localdomain virtqemud[228868]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 06 10:32:52 np0005548790.localdomain ceph-mon[301742]: pgmap v827: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:52 np0005548790.localdomain systemd[1]: efi.automount: Got automount request for /efi, triggered by 328354 (lsinitrd)
Dec 06 10:32:52 np0005548790.localdomain systemd[1]: Mounting EFI System Partition Automount...
Dec 06 10:32:52 np0005548790.localdomain systemd[1]: Mounted EFI System Partition Automount.
Dec 06 10:32:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:52 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59095 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: cache status {prefix=cache status} (starting...)
Dec 06 10:32:53 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: client ls {prefix=client ls} (starting...)
Dec 06 10:32:53 np0005548790.localdomain lvm[328438]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 06 10:32:53 np0005548790.localdomain lvm[328438]: VG ceph_vg1 finished
Dec 06 10:32:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49692 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69284 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:53.323 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:53 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:53.328 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:53 np0005548790.localdomain lvm[328453]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 06 10:32:53 np0005548790.localdomain lvm[328453]: VG ceph_vg0 finished
Dec 06 10:32:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69287 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:53 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:53 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:53 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: ERROR   10:32:53 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 06 10:32:53 np0005548790.localdomain openstack_network_exporter[241796]: 
Dec 06 10:32:53 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69299 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain ceph-mon[301742]: from='client.49686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain ceph-mon[301742]: from='client.59095 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:53 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: damage ls {prefix=damage ls} (starting...)
Dec 06 10:32:53 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: dump loads {prefix=dump loads} (starting...)
Dec 06 10:32:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49716 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:32:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:32:54.026+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:32:54 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 06 10:32:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59125 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:32:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:32:54.151+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "report"} v 0)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2581391178' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:54 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 06 10:32:54 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 06 10:32:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:54.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:54 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:54.333 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:54 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69323 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:32:54 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:32:54.336+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:32:54 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1949286175' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.49692 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.69284 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.69287 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: pgmap v828: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.69299 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2201466626' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/595936019' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4106160759' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2581391178' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2116789574' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3723108171' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1477964806' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3383880708' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1949286175' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3501209297' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1390228284' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 06 10:32:54 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config log"} v 0)
Dec 06 10:32:54 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3256193873' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: ops {prefix=ops} (starting...)
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3484329055' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59170 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3151208344' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49767 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2354599686' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59182 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: session ls {prefix=session ls} (starting...)
Dec 06 10:32:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69380 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.49716 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.59125 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.69323 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1390228284' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4083339819' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2031437587' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3256193873' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/26526715' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1960715726' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3484329055' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/439920032' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3151208344' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/512599547' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2354599686' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49785 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:55 np0005548790.localdomain ceph-mds[285635]: mds.mds.np0005548790.vhcezv asok_command: status {prefix=status} (starting...)
Dec 06 10:32:56 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69395 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.352 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.352 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Auditing locally available compute resources for np0005548790.localdomain (node: np0005548790.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.352 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1911255415' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "features"} v 0)
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/608376543' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/558313208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.801 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.59170 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.49767 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: pgmap v829: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.59182 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.69380 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2396717601' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.49785 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1988228710' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/923202201' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3829669704' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3306622931' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3917801459' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3394250906' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1911255415' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/608376543' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2522593498' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1497560563' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2669572884' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/558313208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.914 280869 WARNING nova.virt.libvirt.driver [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.915 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Hypervisor/Node resource view: name=np0005548790.localdomain free_ram=11250MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.915 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 06 10:32:56 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:56.915 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 06 10:32:56 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/192676122' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.026 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59227 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:32:57.029+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3282262568' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49836 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:32:57.147+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.152 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.152 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Final resource view: name=np0005548790.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.165 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2365653258' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69452 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:32:57.452+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59254 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.652 280869 DEBUG oslo_concurrency.processutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.659 280869 DEBUG nova.compute.provider_tree [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed in ProviderTree for provider: 9d142787-bd19-4b53-bf45-24c0e0c1cff0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.678 280869 DEBUG nova.scheduler.client.report [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Inventory has not changed for provider 9d142787-bd19-4b53-bf45-24c0e0c1cff0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.680 280869 DEBUG nova.compute.resource_tracker [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Compute_service record updated for np0005548790.localdomain:np0005548790.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 06 10:32:57 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:57.680 280869 DEBUG oslo_concurrency.lockutils [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.69395 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4026296822' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/192676122' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3282262568' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1118821392' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2399987487' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2365653258' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1461476266' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1880444688' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1069869138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69464 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49863 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 06 10:32:57 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2083356224' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59269 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69476 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49875 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:58.324 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59275 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/997443539' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69488 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:58.681 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:32:58 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:32:58.681 280869 DEBUG nova.compute.manager [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49890 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59293 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.59227 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.49836 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.69452 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.69446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: pgmap v830: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.59254 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.69464 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.49863 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2083356224' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1654648070' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2756974343' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/997443539' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2621128474' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/880484158' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 06 10:32:58 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3924164660' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49905 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59308 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:31.730183+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:32.730358+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:33.730546+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:34.730698+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:35.730877+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:36.731061+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:37.731225+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:38.731397+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:39.731597+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:40.731750+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:41.731961+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:42.732104+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:43.732294+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:44.732514+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:45.732700+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:46.732865+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:47.733052+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:48.733213+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:49.733414+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:50.733574+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:51.733763+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:52.733989+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:53.734149+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:54.734304+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:55.734452+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:56.734599+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85843968 unmapped: 1654784 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:57.734747+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85843968 unmapped: 1654784 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:58.734886+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:59.735154+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:00.735347+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:01.735536+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:02.735738+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:03.735954+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:04.736148+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85852160 unmapped: 1646592 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:05.736359+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85860352 unmapped: 1638400 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:06.736558+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85860352 unmapped: 1638400 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:07.736716+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85860352 unmapped: 1638400 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:08.736915+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 876393 data_alloc: 285212672 data_used: 3788800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85860352 unmapped: 1638400 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:09.737160+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 heartbeat osd_stat(store_statfs(0x1b9bdc000/0x0/0x1bfc00000, data 0x1e26d0f/0x1eb2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85860352 unmapped: 1638400 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:10.737358+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85860352 unmapped: 1638400 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:11.737544+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 63.744449615s of 63.819202423s, submitted: 17
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 42
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now 
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc reconnect Terminating session with v2:172.18.0.106:6810/2148019987
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc reconnect No active mgr available yet
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 ms_handle_reset con 0x560b57784000 session 0x560b5723ad20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 ms_handle_reset con 0x560b57701800 session 0x560b56fe9a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 ms_handle_reset con 0x560b56091000 session 0x560b571d10e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57785000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85893120 unmapped: 1605632 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:12.737744+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 43
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: get_auth_request con 0x560b57977c00 auth_method 0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_configure stats_period=5
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86048768 unmapped: 1449984 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:13.737895+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b547b7800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5517fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86269952 unmapped: 1228800 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:14.738041+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 44
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86278144 unmapped: 1220608 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:15.738205+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86278144 unmapped: 1220608 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 45
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:16.738350+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:17.738491+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:18.738627+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:19.738871+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:20.739051+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:21.739226+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:22.739392+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:23.739556+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:24.739721+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:25.739944+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:26.740109+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:27.740262+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:28.740491+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:29.740675+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:30.740828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:31.740998+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:32.741190+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:33.741363+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:34.741507+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:35.741690+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:36.741858+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:37.742043+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:38.742214+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:39.742414+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:40.742607+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:41.742769+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:42.742956+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:43.743133+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:44.743308+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:45.743443+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:46.743615+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:47.743813+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:48.743954+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:49.744135+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:50.744281+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:51.744484+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86097920 unmapped: 1400832 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:52.744692+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 1392640 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:53.744864+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 1392640 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:54.745000+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 1392640 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:55.745160+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 1392640 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:56.745289+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 1392640 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:57.745451+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 1392640 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:58.745605+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:59.745822+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:00.745969+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:01.746215+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:02.746364+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:03.746534+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:04.746692+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:05.746876+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:06.747023+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:07.747161+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:08.747327+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:09.747526+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:10.747669+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:11.747808+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:12.747965+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:13.748117+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:14.748258+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:15.748448+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:16.748624+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:17.748825+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:18.776913+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:19.777115+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:20.777346+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:21.777517+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85811200 unmapped: 1687552 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:22.777742+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:23.778376+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:24.778540+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:25.779149+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:26.779617+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:27.779886+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:28.780495+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:29.781126+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:30.781341+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:31.781473+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85819392 unmapped: 1679360 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:32.781630+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:33.781844+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:34.782308+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:35.782870+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:36.783327+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85827584 unmapped: 1671168 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:37.783580+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:38.783803+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 881041 data_alloc: 285212672 data_used: 3801088
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:39.784035+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 85835776 unmapped: 1662976 heap: 87498752 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:40.784205+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 89.555458069s of 89.631095886s, submitted: 19
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 94380032 unmapped: 9904128 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:41.784371+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 heartbeat osd_stat(store_statfs(0x1b9bd6000/0x0/0x1bfc00000, data 0x1e296ac/0x1eb7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87162880 unmapped: 17121280 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:42.784551+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 46
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 96 ms_handle_reset con 0x560b57765c00 session 0x560b5723a3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87457792 unmapped: 16826368 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:43.784709+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1116436 data_alloc: 285212672 data_used: 3817472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87465984 unmapped: 16818176 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:44.784851+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 ms_handle_reset con 0x560b56091000 session 0x560b5723a1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:45.785110+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:46.785322+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:47.785578+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:48.785713+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1120172 data_alloc: 285212672 data_used: 3817472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:49.785933+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:50.786229+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:51.786516+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:52.786840+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:53.787129+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1120172 data_alloc: 285212672 data_used: 3817472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:54.787353+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:55.787740+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:56.787895+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69521 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:57.788155+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:58.788394+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1120172 data_alloc: 285212672 data_used: 3817472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87523328 unmapped: 16760832 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:59.788651+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:00.788863+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:01.789077+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:02.789266+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:03.789508+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1120172 data_alloc: 285212672 data_used: 3817472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:04.789682+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:05.789956+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:06.790164+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:07.790347+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:08.790610+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 87531520 unmapped: 16752640 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57701800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 27.581750870s of 27.921802521s, submitted: 46
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1122465 data_alloc: 285212672 data_used: 3817472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:09.790839+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88604672 unmapped: 15679488 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 98 ms_handle_reset con 0x560b57701800 session 0x560b571d10e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 98 heartbeat osd_stat(store_statfs(0x1b7aeb000/0x0/0x1bfc00000, data 0x3f0e295/0x3fa2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:10.791002+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88653824 unmapped: 15630336 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:11.791150+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88653824 unmapped: 15630336 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d6800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:12.791289+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88662016 unmapped: 15622144 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 99 ms_handle_reset con 0x560b580d6800 session 0x560b571805a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:13.791449+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88694784 unmapped: 15589376 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1130657 data_alloc: 285212672 data_used: 3842048
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:14.791599+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88694784 unmapped: 15589376 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:15.791748+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88711168 unmapped: 15572992 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 99 heartbeat osd_stat(store_statfs(0x1b7ae3000/0x0/0x1bfc00000, data 0x3f12d4f/0x3faa000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:16.791888+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88711168 unmapped: 15572992 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:17.792064+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88711168 unmapped: 15572992 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:18.792248+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88711168 unmapped: 15572992 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1130657 data_alloc: 285212672 data_used: 3842048
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:19.792414+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88711168 unmapped: 15572992 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d7400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.987459183s of 11.214921951s, submitted: 63
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:20.792560+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88760320 unmapped: 15523840 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 100 ms_handle_reset con 0x560b580d7400 session 0x560b55bf2780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 100 heartbeat osd_stat(store_statfs(0x1b7adc000/0x0/0x1bfc00000, data 0x3f156d1/0x3fb1000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:21.792741+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88768512 unmapped: 15515648 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:22.792908+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88850432 unmapped: 15433728 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 101 heartbeat osd_stat(store_statfs(0x1b7adc000/0x0/0x1bfc00000, data 0x3f156d1/0x3fb1000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 101 heartbeat osd_stat(store_statfs(0x1b7ad7000/0x0/0x1bfc00000, data 0x3f17aea/0x3fb5000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:23.793050+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88850432 unmapped: 15433728 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:24.793189+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1144213 data_alloc: 285212672 data_used: 3854336
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88866816 unmapped: 15417344 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 101 heartbeat osd_stat(store_statfs(0x1b7ad7000/0x0/0x1bfc00000, data 0x3f17aea/0x3fb5000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:25.793338+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88866816 unmapped: 15417344 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:26.793475+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88866816 unmapped: 15417344 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d6c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:27.793686+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 88899584 unmapped: 15384576 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 102 ms_handle_reset con 0x560b580d6c00 session 0x560b551f1e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:28.793825+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 15269888 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:29.794009+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1144154 data_alloc: 285212672 data_used: 3866624
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 15269888 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 102 heartbeat osd_stat(store_statfs(0x1b7ad5000/0x0/0x1bfc00000, data 0x3f19c3e/0x3fb6000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:30.794151+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:31.794322+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:32.794504+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:33.794696+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:34.794885+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1144154 data_alloc: 285212672 data_used: 3866624
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 102 heartbeat osd_stat(store_statfs(0x1b7ad5000/0x0/0x1bfc00000, data 0x3f19c3e/0x3fb6000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:35.795154+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:36.795372+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89022464 unmapped: 15261696 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 16.053407669s of 16.401380539s, submitted: 101
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:37.795625+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:38.795888+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:39.796151+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1146964 data_alloc: 285212672 data_used: 3866624
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b7ad3000/0x0/0x1bfc00000, data 0x3f1c057/0x3fba000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:40.796296+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:41.796517+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b7ad3000/0x0/0x1bfc00000, data 0x3f1c057/0x3fba000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:42.796693+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:43.796974+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89063424 unmapped: 15220736 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:44.797126+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1146964 data_alloc: 285212672 data_used: 3866624
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 89071616 unmapped: 15212544 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b56091000 session 0x560b55586f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57701800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:45.797329+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 96436224 unmapped: 7847936 heap: 104284160 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b57701800 session 0x560b58db25a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b7ad3000/0x0/0x1bfc00000, data 0x3f1c057/0x3fba000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d6800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:46.797472+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.867151260s of 10.004196167s, submitted: 50
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 4235264 heap: 112754688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b580d6800 session 0x560b58db2960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d7400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b580d7400 session 0x560b58db2d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56185000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b56185000 session 0x560b58db2f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b69e2000/0x0/0x1bfc00000, data 0x500d080/0x50ac000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b56091000 session 0x560b58db30e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57701800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b57701800 session 0x560b58db32c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:47.797616+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103153664 unmapped: 10657792 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:48.797819+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103235584 unmapped: 10575872 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b69e2000/0x0/0x1bfc00000, data 0x500d0b9/0x50ac000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d6800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b580d6800 session 0x560b57264f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:49.798039+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d7400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57764000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1319294 data_alloc: 301989888 data_used: 16912384
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 102752256 unmapped: 11059200 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:50.798182+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:51.798408+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b69e2000/0x0/0x1bfc00000, data 0x500d0b9/0x50ac000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:52.798568+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b69e2000/0x0/0x1bfc00000, data 0x500d0b9/0x50ac000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:53.798920+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:54.799132+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1323454 data_alloc: 301989888 data_used: 17444864
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:55.799366+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:56.799522+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:57.799691+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:58.799886+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b69e2000/0x0/0x1bfc00000, data 0x500d0b9/0x50ac000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b69e2000/0x0/0x1bfc00000, data 0x500d0b9/0x50ac000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:59.800210+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1323454 data_alloc: 301989888 data_used: 17444864
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103079936 unmapped: 10731520 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:00.800407+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 103088128 unmapped: 10723328 heap: 113811456 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 14.007783890s of 14.179537773s, submitted: 24
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:01.800584+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 110149632 unmapped: 10043392 heap: 120193024 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b51d4000/0x0/0x1bfc00000, data 0x68150b9/0x68b4000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:02.800827+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 111345664 unmapped: 8847360 heap: 120193024 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:03.800959+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 110804992 unmapped: 9388032 heap: 120193024 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:04.801112+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1516986 data_alloc: 301989888 data_used: 17641472
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 110927872 unmapped: 9265152 heap: 120193024 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:05.801436+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 110927872 unmapped: 9265152 heap: 120193024 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b576ff000 session 0x560b551df2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ffc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 heartbeat osd_stat(store_statfs(0x1b50f1000/0x0/0x1bfc00000, data 0x68f80b9/0x6997000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b580d7400 session 0x560b571581e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b57764000 session 0x560b561a65a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:06.801597+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b576ffc00 session 0x560b56fe9a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 110968832 unmapped: 9224192 heap: 120193024 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:07.801759+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b576ff000 session 0x560b573d52c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b56091000 session 0x560b5899f4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57701800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d6800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b580d6800 session 0x560b561a6780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 112033792 unmapped: 11837440 heap: 123871232 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b56091000 session 0x560b561a7e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b576ff000 session 0x560b574021e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ffc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b576ffc00 session 0x560b57403e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:08.801945+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57764000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 ms_handle_reset con 0x560b57764000 session 0x560b57403680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 112074752 unmapped: 11796480 heap: 123871232 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 ms_handle_reset con 0x560b57701800 session 0x560b5899f2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 ms_handle_reset con 0x560b56091000 session 0x560b571d0780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 ms_handle_reset con 0x560b576ff000 session 0x560b5734a1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ffc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:09.802150+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1858656 data_alloc: 301989888 data_used: 23789568
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 126164992 unmapped: 22396928 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 ms_handle_reset con 0x560b576ffc00 session 0x560b571d10e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57764000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 ms_handle_reset con 0x560b576fec00 session 0x560b5734a000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:10.802292+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fe000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 ms_handle_reset con 0x560b576fe000 session 0x560b5734be00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 122716160 unmapped: 25845760 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.606901169s of 10.002552032s, submitted: 382
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 105 ms_handle_reset con 0x560b56091000 session 0x560b56cea3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 105 ms_handle_reset con 0x560b57764000 session 0x560b55bf34a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 105 heartbeat osd_stat(store_statfs(0x1b25a9000/0x0/0x1bfc00000, data 0x943cc67/0x94e4000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ffc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:11.802493+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123715584 unmapped: 24846336 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 ms_handle_reset con 0x560b576fec00 session 0x560b55586000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f90c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 ms_handle_reset con 0x560b57f90c00 session 0x560b571d0960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f91c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 ms_handle_reset con 0x560b57f91c00 session 0x560b573d5e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 ms_handle_reset con 0x560b56091000 session 0x560b573d4f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:12.802616+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 heartbeat osd_stat(store_statfs(0x1b25a4000/0x0/0x1bfc00000, data 0x943f1b6/0x94e8000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123805696 unmapped: 24756224 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:13.802748+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 ms_handle_reset con 0x560b576fec00 session 0x560b573d41e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 29130752 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:14.802969+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1586091 data_alloc: 301989888 data_used: 16936960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 29130752 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:15.803164+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 29130752 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:16.803294+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 115884032 unmapped: 32677888 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 heartbeat osd_stat(store_statfs(0x1b4f7e000/0x0/0x1bfc00000, data 0x6a6556d/0x6b0f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:17.803420+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 115884032 unmapped: 32677888 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f90c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b57f90c00 session 0x560b55ccc000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:18.803574+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549fe400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b549ff000 session 0x560b55ccc960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b549fe400 session 0x560b55b4f4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b549ff000 session 0x560b57c9b0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b56091000 session 0x560b57c9b2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 115908608 unmapped: 32653312 heap: 148561920 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b576fec00 session 0x560b57c9ab40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f90c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b57f90c00 session 0x560b55bf3a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:19.803836+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1850987 data_alloc: 318767104 data_used: 27734016
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 130826240 unmapped: 24010752 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b56f4fc00 session 0x560b5899f860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:20.803977+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 130834432 unmapped: 24002560 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:21.804102+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.107078552s of 10.821249962s, submitted: 179
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 ms_handle_reset con 0x560b549ff000 session 0x560b5734b4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 23814144 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:22.804232+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 heartbeat osd_stat(store_statfs(0x1b307c000/0x0/0x1bfc00000, data 0x89665f2/0x8a12000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131219456 unmapped: 23617536 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 ms_handle_reset con 0x560b576fec00 session 0x560b55587a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:23.804348+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 117456896 unmapped: 37380096 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:24.804495+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1589730 data_alloc: 301989888 data_used: 17014784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 117456896 unmapped: 37380096 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 heartbeat osd_stat(store_statfs(0x1b5184000/0x0/0x1bfc00000, data 0x685bb07/0x6907000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:25.804683+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 118480896 unmapped: 36356096 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:26.804852+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 118480896 unmapped: 36356096 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:27.805009+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f90c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57784000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123928576 unmapped: 30908416 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:28.805134+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 ms_handle_reset con 0x560b576ff000 session 0x560b55b4e780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 ms_handle_reset con 0x560b576ffc00 session 0x560b55586f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123936768 unmapped: 30900224 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:29.805305+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 heartbeat osd_stat(store_statfs(0x1b3ff6000/0x0/0x1bfc00000, data 0x79e4b07/0x7a90000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1732052 data_alloc: 301989888 data_used: 18792448
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 122216448 unmapped: 32620544 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:30.805499+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121569280 unmapped: 33267712 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:31.805685+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.235944748s of 10.038082123s, submitted: 264
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121569280 unmapped: 33267712 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:32.805836+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121610240 unmapped: 33226752 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 heartbeat osd_stat(store_statfs(0x1b3f54000/0x0/0x1bfc00000, data 0x7a8bf20/0x7b39000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:33.805958+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 heartbeat osd_stat(store_statfs(0x1b3f54000/0x0/0x1bfc00000, data 0x7a8bf20/0x7b39000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121798656 unmapped: 33038336 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:34.806108+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1768794 data_alloc: 301989888 data_used: 20467712
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121962496 unmapped: 32874496 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:35.806332+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123363328 unmapped: 31473664 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:36.806486+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123363328 unmapped: 31473664 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:37.806650+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 ms_handle_reset con 0x560b56091000 session 0x560b55b4fe00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 ms_handle_reset con 0x560b56f4fc00 session 0x560b5899fc20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123379712 unmapped: 31457280 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 heartbeat osd_stat(store_statfs(0x1b3f2b000/0x0/0x1bfc00000, data 0x7ab2f20/0x7b60000, compress 0x0/0x0/0x0, omap 0x647, meta 0x416f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:38.806861+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 ms_handle_reset con 0x560b549ff000 session 0x560b58038d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 109 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 110 ms_handle_reset con 0x560b56091000 session 0x560b580392c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a545800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123437056 unmapped: 31399936 heap: 154836992 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 110 ms_handle_reset con 0x560b576fec00 session 0x560b580394a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 110 ms_handle_reset con 0x560b5a545800 session 0x560b58039860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:39.807025+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1999699 data_alloc: 318767104 data_used: 27246592
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 110 ms_handle_reset con 0x560b549ff000 session 0x560b58039a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135880704 unmapped: 28073984 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:40.807092+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 111 ms_handle_reset con 0x560b576fec00 session 0x560b5befe3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135962624 unmapped: 27992064 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:41.807206+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.565119743s of 10.127847672s, submitted: 142
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56f4e000 session 0x560b574023c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135979008 unmapped: 27975680 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56f4fc00 session 0x560b58db3680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56091800 session 0x560b571d03c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:42.807329+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b549ff000 session 0x560b58039c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135987200 unmapped: 27967488 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:43.807461+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56091800 session 0x560b58db2960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123830272 unmapped: 40124416 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 heartbeat osd_stat(store_statfs(0x1b3ba2000/0x0/0x1bfc00000, data 0x7391f16/0x7444000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56f4e000 session 0x560b57265680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56f4fc00 session 0x560b561a6000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576fec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b576fec00 session 0x560b571581e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets getting new tickets!
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:44.807701+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _finish_auth 0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:44.809306+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b549ff000 session 0x560b5befef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56091800 session 0x560b5beff680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1835045 data_alloc: 301989888 data_used: 17154048
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121888768 unmapped: 42065920 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 heartbeat osd_stat(store_statfs(0x1b2cea000/0x0/0x1bfc00000, data 0x8247f88/0x82fc000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:45.807858+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121970688 unmapped: 41984000 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:46.808003+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 ms_handle_reset con 0x560b56f4e000 session 0x560b5befe780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56f4fc00 session 0x560b55587a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56091000 session 0x560b555863c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121102336 unmapped: 42852352 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 heartbeat osd_stat(store_statfs(0x1b338d000/0x0/0x1bfc00000, data 0x824a3a1/0x8300000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b549ff000 session 0x560b55bb4960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56091000 session 0x560b57265680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56091800 session 0x560b57265e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56f4e000 session 0x560b55cccd20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:47.808169+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135495680 unmapped: 28459008 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56f4fc00 session 0x560b574012c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b549ff000 session 0x560b5723b0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:48.808297+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132546560 unmapped: 31408128 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:49.808450+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2088396 data_alloc: 318767104 data_used: 25935872
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 heartbeat osd_stat(store_statfs(0x1b189f000/0x0/0x1bfc00000, data 0x9d373d4/0x9def000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 ms_handle_reset con 0x560b56f4e000 session 0x560b55586000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132153344 unmapped: 31801344 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57764c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:50.808577+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132169728 unmapped: 31784960 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b57765400 session 0x560b571d03c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:51.808703+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.138403893s of 10.126540184s, submitted: 270
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 125992960 unmapped: 37961728 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b57765c00 session 0x560b5734b4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808e800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b5808e800 session 0x560b57264b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b549ff000 session 0x560b574023c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:52.808818+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b56f4fc00 session 0x560b54ee6000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b57764c00 session 0x560b55bf2b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b56f4e000 session 0x560b573d52c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b57765400 session 0x560b5734b2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b57f90c00 session 0x560b55ccc960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b57784000 session 0x560b571590e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 122912768 unmapped: 41041920 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b549ff000 session 0x560b5899ef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:53.808945+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 ms_handle_reset con 0x560b56f4e000 session 0x560b571801e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121159680 unmapped: 42795008 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:54.809100+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1586041 data_alloc: 301989888 data_used: 15572992
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121159680 unmapped: 42795008 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 heartbeat osd_stat(store_statfs(0x1b4461000/0x0/0x1bfc00000, data 0x5b48877/0x5bfe000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:55.809293+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 121200640 unmapped: 42754048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:56.809422+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 122658816 unmapped: 41295872 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b56f4fc00 session 0x560b57265e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:57.809544+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 122667008 unmapped: 41287680 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b5a61000/0x0/0x1bfc00000, data 0x5b74c90/0x5c2c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:58.809676+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 122699776 unmapped: 41254912 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b5a61000/0x0/0x1bfc00000, data 0x5b74c90/0x5c2c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:59.809885+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1610062 data_alloc: 301989888 data_used: 17887232
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123838464 unmapped: 40116224 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:00.810006+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123895808 unmapped: 40058880 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:01.810194+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123920384 unmapped: 40034304 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:02.810342+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123920384 unmapped: 40034304 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:03.810483+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b5a61000/0x0/0x1bfc00000, data 0x5b74c90/0x5c2c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123920384 unmapped: 40034304 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:04.810619+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1610222 data_alloc: 301989888 data_used: 17895424
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123920384 unmapped: 40034304 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57784000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 12.591785431s of 13.189778328s, submitted: 191
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f90c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:05.810819+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b56091000 session 0x560b55ccc5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b56091800 session 0x560b551de1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123928576 unmapped: 40026112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:06.810986+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123928576 unmapped: 40026112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b5a62000/0x0/0x1bfc00000, data 0x5b74c90/0x5c2c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:07.811163+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 123928576 unmapped: 40026112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:08.811320+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131555328 unmapped: 32399360 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:09.811507+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1797668 data_alloc: 301989888 data_used: 18616320
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134471680 unmapped: 29483008 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b3e75000/0x0/0x1bfc00000, data 0x7761c90/0x7819000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:10.811637+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b3e75000/0x0/0x1bfc00000, data 0x7761c90/0x7819000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134168576 unmapped: 29786112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:11.811817+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134643712 unmapped: 29310976 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:12.811940+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134676480 unmapped: 29278208 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:13.812102+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134692864 unmapped: 29261824 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:14.812246+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1898390 data_alloc: 301989888 data_used: 19091456
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134791168 unmapped: 29163520 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:15.812381+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134791168 unmapped: 29163520 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b38a0000/0x0/0x1bfc00000, data 0x7d2dc90/0x7de5000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:16.812527+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.565533638s of 11.564317703s, submitted: 305
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134709248 unmapped: 29245440 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b549ff000 session 0x560b5734be00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b56f4e000 session 0x560b571705a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:17.812694+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133226496 unmapped: 30728192 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b57784000 session 0x560b55bb4960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b57f90c00 session 0x560b57171860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b55afd800 session 0x560b56f90f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b53db000/0x0/0x1bfc00000, data 0x5e73c80/0x5f2a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:18.812851+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 ms_handle_reset con 0x560b549ff000 session 0x560b571814a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:19.813054+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:20.813205+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:21.813342+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:22.813492+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:23.813645+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:24.813816+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:25.814021+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:26.814237+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:27.814439+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 32645120 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:28.814584+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 130932736 unmapped: 33021952 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:29.814764+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 130932736 unmapped: 33021952 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:30.814973+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 130932736 unmapped: 33021952 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:31.815126+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131014656 unmapped: 32940032 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:32.815521+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131014656 unmapped: 32940032 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:33.815703+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131014656 unmapped: 32940032 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:34.815905+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131014656 unmapped: 32940032 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:35.816057+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131014656 unmapped: 32940032 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:36.816232+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:37.816444+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:38.816613+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:39.816832+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:40.816993+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:41.817167+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:42.817313+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:43.817486+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131022848 unmapped: 32931840 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:44.817643+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:45.817854+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:46.818059+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:47.818267+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:48.818441+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:49.818624+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:50.818767+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:51.818957+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131031040 unmapped: 32923648 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:52.819145+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131039232 unmapped: 32915456 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:53.819331+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:54.819510+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:55.819643+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:56.819829+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:57.820012+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:58.820176+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:59.820413+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131055616 unmapped: 32899072 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:00.820614+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:01.820799+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:02.820966+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:03.821152+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:04.821295+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:05.821477+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:06.821636+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:07.821858+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131063808 unmapped: 32890880 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:08.822043+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:09.822235+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:10.822393+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:11.822571+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:12.822836+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:13.822994+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:14.823117+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:15.823539+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131072000 unmapped: 32882688 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:16.824138+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131080192 unmapped: 32874496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:17.824325+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131080192 unmapped: 32874496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:18.824518+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 heartbeat osd_stat(store_statfs(0x1b6d54000/0x0/0x1bfc00000, data 0x3f37bdb/0x3fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131047424 unmapped: 32907264 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:19.824747+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388775 data_alloc: 301989888 data_used: 15581184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131047424 unmapped: 32907264 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:20.824924+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 64.259132385s of 64.730323792s, submitted: 137
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131047424 unmapped: 32907264 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:21.825062+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 116 ms_handle_reset con 0x560b56091000 session 0x560b551f05a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131129344 unmapped: 32825344 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:22.825211+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 117 ms_handle_reset con 0x560b54cd2c00 session 0x560b551df4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 117 ms_handle_reset con 0x560b56f4e000 session 0x560b58db3680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131186688 unmapped: 32768000 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:23.825920+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 118 ms_handle_reset con 0x560b549ff000 session 0x560b58038b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131227648 unmapped: 32727040 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 118 heartbeat osd_stat(store_statfs(0x1b768f000/0x0/0x1bfc00000, data 0x3f3fefd/0x3ffe000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:24.826060+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1423030 data_alloc: 301989888 data_used: 15605760
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 119 ms_handle_reset con 0x560b54cd2c00 session 0x560b571581e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131244032 unmapped: 32710656 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:25.826199+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 120 ms_handle_reset con 0x560b55afd800 session 0x560b580385a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131260416 unmapped: 32694272 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:26.826348+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56091000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131260416 unmapped: 32694272 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:27.826486+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 121 ms_handle_reset con 0x560b56091000 session 0x560b58db2960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 121 handle_osd_map epochs [120,121], i have 121, src has [1,121]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131317760 unmapped: 32636928 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:28.826631+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 121 heartbeat osd_stat(store_statfs(0x1b7683000/0x0/0x1bfc00000, data 0x3f46e49/0x400a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 122 ms_handle_reset con 0x560b549ff000 session 0x560b57081860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131350528 unmapped: 32604160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:29.826837+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 122 heartbeat osd_stat(store_statfs(0x1b767b000/0x0/0x1bfc00000, data 0x3f493ee/0x4011000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1439013 data_alloc: 301989888 data_used: 15618048
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131350528 unmapped: 32604160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:30.826994+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 122 heartbeat osd_stat(store_statfs(0x1b767e000/0x0/0x1bfc00000, data 0x3f493cb/0x4010000, compress 0x0/0x0/0x0, omap 0x647, meta 0x456f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 131358720 unmapped: 32595968 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:31.827175+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.870009422s of 10.358222961s, submitted: 131
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 123 ms_handle_reset con 0x560b54cd2c00 session 0x560b57402000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132472832 unmapped: 31481856 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:32.827426+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132481024 unmapped: 31473664 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:33.827599+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 ms_handle_reset con 0x560b56f4e000 session 0x560b56ceb0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 ms_handle_reset con 0x560b55afd800 session 0x560b56fe85a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 heartbeat osd_stat(store_statfs(0x1b64d4000/0x0/0x1bfc00000, data 0x3f4df1c/0x4019000, compress 0x0/0x0/0x0, omap 0x647, meta 0x570f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57f90c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 ms_handle_reset con 0x560b57f90c00 session 0x560b551f0f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132554752 unmapped: 31399936 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:34.827861+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 ms_handle_reset con 0x560b54cd2c00 session 0x560b56fe92c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1449939 data_alloc: 301989888 data_used: 15646720
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132571136 unmapped: 31383552 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:35.828043+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 ms_handle_reset con 0x560b55afd800 session 0x560b58db2d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 heartbeat osd_stat(store_statfs(0x1b64d4000/0x0/0x1bfc00000, data 0x3f4df1c/0x4019000, compress 0x0/0x0/0x0, omap 0x647, meta 0x570f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 ms_handle_reset con 0x560b56f4e000 session 0x560b573d5a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 ms_handle_reset con 0x560b549ff000 session 0x560b58db2960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 ms_handle_reset con 0x560b54cd2800 session 0x560b58db2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132636672 unmapped: 31318016 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:36.828153+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132734976 unmapped: 31219712 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:37.828323+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 handle_osd_map epochs [126,127], i have 127, src has [1,127]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 ms_handle_reset con 0x560b549ff000 session 0x560b56fe8780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 heartbeat osd_stat(store_statfs(0x1b60ca000/0x0/0x1bfc00000, data 0x3f541df/0x4020000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 ms_handle_reset con 0x560b54cd2c00 session 0x560b5899e1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132825088 unmapped: 31129600 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:38.828490+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 127 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 128 heartbeat osd_stat(store_statfs(0x1b60cb000/0x0/0x1bfc00000, data 0x3f563a3/0x4022000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 128 ms_handle_reset con 0x560b56f4e000 session 0x560b57401a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 128 ms_handle_reset con 0x560b55afd800 session 0x560b58038b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 132907008 unmapped: 31047680 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:39.828651+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b580d6000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1456036 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 129 ms_handle_reset con 0x560b580d6000 session 0x560b55b4e780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133013504 unmapped: 30941184 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:40.828834+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 129 heartbeat osd_stat(store_statfs(0x1b60c8000/0x0/0x1bfc00000, data 0x3f588f1/0x4022000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133013504 unmapped: 30941184 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:41.829004+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.547754288s of 10.013732910s, submitted: 459
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133054464 unmapped: 30900224 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:42.829283+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133054464 unmapped: 30900224 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:43.829422+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133054464 unmapped: 30900224 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:44.829566+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1457998 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133054464 unmapped: 30900224 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:45.829716+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133062656 unmapped: 30892032 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:46.829847+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c7000/0x0/0x1bfc00000, data 0x3f5ad66/0x4026000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:47.829937+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:48.830071+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:49.830210+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461000 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:50.830349+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:51.830738+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:52.830905+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:53.831046+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 30826496 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:54.831195+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 30818304 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461000 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:55.831370+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133136384 unmapped: 30818304 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:56.831507+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:57.831661+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:58.831976+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4060345088' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:59.832255+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461000 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:00.832437+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:01.832574+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:02.832722+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:03.832949+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:04.833081+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1461000 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:05.833241+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133144576 unmapped: 30810112 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 heartbeat osd_stat(store_statfs(0x1b60c3000/0x0/0x1bfc00000, data 0x3f5d1bf/0x402a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 131 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 24.554210663s of 24.624324799s, submitted: 41
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:06.833396+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133169152 unmapped: 30785536 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 133 ms_handle_reset con 0x560b549ff000 session 0x560b5befe3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:07.833541+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133210112 unmapped: 30744576 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:08.834440+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133210112 unmapped: 30744576 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 134 ms_handle_reset con 0x560b54cd2c00 session 0x560b5befef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:09.839116+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133226496 unmapped: 30728192 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1476752 data_alloc: 301989888 data_used: 15659008
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:10.839341+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 134 ms_handle_reset con 0x560b55afd800 session 0x560b571590e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133234688 unmapped: 30720000 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 135 heartbeat osd_stat(store_statfs(0x1b60b4000/0x0/0x1bfc00000, data 0x3f6423a/0x4038000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 135 ms_handle_reset con 0x560b56f4e000 session 0x560b573e8b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:11.839489+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133316608 unmapped: 30638080 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57701000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 136 ms_handle_reset con 0x560b57701000 session 0x560b571814a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 136 heartbeat osd_stat(store_statfs(0x1b60af000/0x0/0x1bfc00000, data 0x3f667d1/0x403c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:12.839621+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133398528 unmapped: 30556160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 136 ms_handle_reset con 0x560b549ff000 session 0x560b57171860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:13.839797+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133398528 unmapped: 30556160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:14.840447+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133398528 unmapped: 30556160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1478756 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:15.841608+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133398528 unmapped: 30556160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:16.841818+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133398528 unmapped: 30556160 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.152268410s of 10.471338272s, submitted: 122
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 137 heartbeat osd_stat(store_statfs(0x1b60ab000/0x0/0x1bfc00000, data 0x3f6afad/0x4042000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:17.842329+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 30466048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:18.842810+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 30466048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:19.843088+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 30466048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:20.843381+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1481758 data_alloc: 301989888 data_used: 15654912
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 30466048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 137 ms_handle_reset con 0x560b54cd2c00 session 0x560b5723b2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:21.843580+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 30466048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:22.843911+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133488640 unmapped: 30466048 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 137 heartbeat osd_stat(store_statfs(0x1b60ab000/0x0/0x1bfc00000, data 0x3f6afad/0x4042000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:23.844112+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133521408 unmapped: 30433280 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 ms_handle_reset con 0x560b56f4e000 session 0x560b571e5e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:24.844314+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133521408 unmapped: 30433280 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:25.844480+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1492041 data_alloc: 301989888 data_used: 15667200
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 heartbeat osd_stat(store_statfs(0x1b60a6000/0x0/0x1bfc00000, data 0x3f6d5a4/0x4048000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133521408 unmapped: 30433280 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 ms_handle_reset con 0x560b57700800 session 0x560b5899e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55af2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 ms_handle_reset con 0x560b55af2000 session 0x560b551dfa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:26.844683+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133562368 unmapped: 30392320 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.002162933s of 10.157238960s, submitted: 60
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b549ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 ms_handle_reset con 0x560b549ff000 session 0x560b55ccd4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:27.844858+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133562368 unmapped: 30392320 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 139 ms_handle_reset con 0x560b54cd2c00 session 0x560b55bf2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:28.845523+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133586944 unmapped: 30367744 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:29.845754+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133595136 unmapped: 30359552 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:30.845905+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1491709 data_alloc: 301989888 data_used: 15679488
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134651904 unmapped: 29302784 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 139 heartbeat osd_stat(store_statfs(0x1b60a4000/0x0/0x1bfc00000, data 0x3f6fa67/0x404a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:31.846100+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134651904 unmapped: 29302784 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 139 ms_handle_reset con 0x560b56f4e000 session 0x560b57265680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:32.846236+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133603328 unmapped: 30351360 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:33.846421+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133603328 unmapped: 30351360 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 139 ms_handle_reset con 0x560b57700800 session 0x560b56cea780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:34.846593+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133603328 unmapped: 30351360 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:35.846811+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1496489 data_alloc: 301989888 data_used: 15679488
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133603328 unmapped: 30351360 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 139 heartbeat osd_stat(store_statfs(0x1b60a3000/0x0/0x1bfc00000, data 0x3f6fa78/0x404b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:36.846964+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133636096 unmapped: 30318592 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.913813591s of 10.091645241s, submitted: 60
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:37.847157+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133636096 unmapped: 30318592 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:38.847327+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133636096 unmapped: 30318592 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:39.847506+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133636096 unmapped: 30318592 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1a000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1a400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 140 ms_handle_reset con 0x560b58e1a400 session 0x560b561a63c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:40.847750+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1503183 data_alloc: 301989888 data_used: 15691776
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133685248 unmapped: 30269440 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 141 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 141 ms_handle_reset con 0x560b58e1a000 session 0x560b5723be00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 141 ms_handle_reset con 0x560b56f4e000 session 0x560b573d5860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:41.848014+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 141 ms_handle_reset con 0x560b57700800 session 0x560b55cced20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133718016 unmapped: 30236672 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 142 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 142 ms_handle_reset con 0x560b54cd2c00 session 0x560b57180960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 142 heartbeat osd_stat(store_statfs(0x1b6099000/0x0/0x1bfc00000, data 0x3f743c4/0x4053000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:42.848154+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133750784 unmapped: 30203904 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1a800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 142 ms_handle_reset con 0x560b58e1a800 session 0x560b58038000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:43.849515+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133750784 unmapped: 30203904 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 143 ms_handle_reset con 0x560b54cd2c00 session 0x560b573d52c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 143 ms_handle_reset con 0x560b56f4e000 session 0x560b5723b0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:44.849707+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 133857280 unmapped: 30097408 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 143 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 144 ms_handle_reset con 0x560b57700800 session 0x560b56fe9a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1a000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:45.849872+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 144 heartbeat osd_stat(store_statfs(0x1b608e000/0x0/0x1bfc00000, data 0x3f78e7e/0x405b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [0,0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1516659 data_alloc: 301989888 data_used: 15704064
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 144 ms_handle_reset con 0x560b58e1ac00 session 0x560b55cce780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1b000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 134971392 unmapped: 28983296 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 heartbeat osd_stat(store_statfs(0x1b608d000/0x0/0x1bfc00000, data 0x3f7b415/0x4060000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b58e1a000 session 0x560b551f1680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b58e1b000 session 0x560b57171c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b54cd2c00 session 0x560b54ee8780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:46.850008+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 28884992 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b56f4e000 session 0x560b59a6fe00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:47.850576+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.952062607s of 10.532035828s, submitted: 171
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b57700800 session 0x560b55bf2b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 28884992 heap: 163954688 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:48.851084+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135077888 unmapped: 37273600 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1b400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:49.851340+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 heartbeat osd_stat(store_statfs(0x1b5889000/0x0/0x1bfc00000, data 0x477d9d8/0x4865000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143810560 unmapped: 28540928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b58e1b400 session 0x560b5befe3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:50.851563+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 ms_handle_reset con 0x560b54cd2c00 session 0x560b57401a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1744199 data_alloc: 301989888 data_used: 15704064
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 heartbeat osd_stat(store_statfs(0x1b4088000/0x0/0x1bfc00000, data 0x5f7da3b/0x6066000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143802368 unmapped: 28549120 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:51.851726+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135454720 unmapped: 36896768 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 147 ms_handle_reset con 0x560b56f4e000 session 0x560b56fe8780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 147 handle_osd_map epochs [146,147], i have 147, src has [1,147]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:52.851919+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1b000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 147 ms_handle_reset con 0x560b58e1b000 session 0x560b573d4960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135528448 unmapped: 36823040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 147 ms_handle_reset con 0x560b57700800 session 0x560b58db2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 148 ms_handle_reset con 0x560b58e1ac00 session 0x560b55bf2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:53.852209+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135569408 unmapped: 36782080 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 149 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 149 ms_handle_reset con 0x560b56f4e000 session 0x560b59a6ef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:54.852351+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 149 ms_handle_reset con 0x560b54cd2c00 session 0x560b55bf34a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135577600 unmapped: 36773888 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 150 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 150 heartbeat osd_stat(store_statfs(0x1b6078000/0x0/0x1bfc00000, data 0x3f86fd9/0x4074000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:55.852558+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 150 ms_handle_reset con 0x560b57700800 session 0x560b59a6f2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1553062 data_alloc: 301989888 data_used: 15732736
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 135585792 unmapped: 36765696 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 151 ms_handle_reset con 0x560b58e1ac00 session 0x560b57170f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:56.852756+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1b000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 151 ms_handle_reset con 0x560b58e1b000 session 0x560b571d10e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136658944 unmapped: 35692544 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 152 ms_handle_reset con 0x560b54cd2c00 session 0x560b571d1680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:57.853052+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 152 ms_handle_reset con 0x560b56f4e000 session 0x560b59a6e780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.298711777s of 10.296799660s, submitted: 257
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136683520 unmapped: 35667968 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 152 ms_handle_reset con 0x560b57700800 session 0x560b57181c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 152 heartbeat osd_stat(store_statfs(0x1b606d000/0x0/0x1bfc00000, data 0x3f8e09a/0x4080000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:58.853480+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136683520 unmapped: 35667968 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:59.853698+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136683520 unmapped: 35667968 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:00.853924+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1556362 data_alloc: 301989888 data_used: 15732736
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136691712 unmapped: 35659776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:01.854093+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136699904 unmapped: 35651584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 153 heartbeat osd_stat(store_statfs(0x1b606f000/0x0/0x1bfc00000, data 0x3f8dfe8/0x407e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:02.854243+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136699904 unmapped: 35651584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:03.854411+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136699904 unmapped: 35651584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:04.854542+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136699904 unmapped: 35651584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 153 heartbeat osd_stat(store_statfs(0x1b606b000/0x0/0x1bfc00000, data 0x3f90459/0x4082000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:05.854693+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1559364 data_alloc: 301989888 data_used: 15732736
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 153 ms_handle_reset con 0x560b58e1ac00 session 0x560b56f90780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136699904 unmapped: 35651584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:06.854853+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1b000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136699904 unmapped: 35651584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:07.855017+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.896513939s of 10.003350258s, submitted: 56
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136716288 unmapped: 35635200 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 154 ms_handle_reset con 0x560b58e1b000 session 0x560b5899ed20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:08.855140+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 154 ms_handle_reset con 0x560b54cd2c00 session 0x560b56fe85a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 154 heartbeat osd_stat(store_statfs(0x1b6065000/0x0/0x1bfc00000, data 0x3f929fe/0x4088000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136724480 unmapped: 35627008 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:09.855286+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 154 ms_handle_reset con 0x560b56f4e000 session 0x560b574030e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136732672 unmapped: 35618816 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 155 ms_handle_reset con 0x560b57700800 session 0x560b57171a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:10.855427+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1572594 data_alloc: 301989888 data_used: 15757312
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136773632 unmapped: 35577856 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 155 ms_handle_reset con 0x560b58e1ac00 session 0x560b57265e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:11.855561+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1b800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 156 ms_handle_reset con 0x560b58e1b800 session 0x560b58db25a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136839168 unmapped: 35512320 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 156 ms_handle_reset con 0x560b54cd2c00 session 0x560b59a6e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:12.855681+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 156 heartbeat osd_stat(store_statfs(0x1b6060000/0x0/0x1bfc00000, data 0x3f974aa/0x408e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136863744 unmapped: 35487744 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:13.855851+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136863744 unmapped: 35487744 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:14.856176+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136863744 unmapped: 35487744 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:15.856402+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1570876 data_alloc: 301989888 data_used: 15757312
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136863744 unmapped: 35487744 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:16.856601+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b6060000/0x0/0x1bfc00000, data 0x3f974aa/0x408e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:17.856919+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:18.857686+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.440340042s of 10.707306862s, submitted: 125
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 47
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b58e1ac00 session 0x560b59a6fa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b6057000/0x0/0x1bfc00000, data 0x3f9e6e0/0x4097000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:19.857912+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:20.858097+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1577011 data_alloc: 301989888 data_used: 15769600
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:21.858248+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:22.858410+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:23.858635+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136880128 unmapped: 35471360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:24.858938+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 48
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 136888320 unmapped: 35463168 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b6051000/0x0/0x1bfc00000, data 0x3fa4f2c/0x409d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:25.859125+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1bc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1577251 data_alloc: 301989888 data_used: 15769600
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b58e1bc00 session 0x560b54ee8b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137330688 unmapped: 35020800 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74f800 session 0x560b55587a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74fc00 session 0x560b54ee81e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b604f000/0x0/0x1bfc00000, data 0x3fa6c87/0x409f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:26.859327+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74e000 session 0x560b555863c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 35012608 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:27.859533+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 35012608 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:28.859710+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 35012608 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b604f000/0x0/0x1bfc00000, data 0x3fa6c87/0x409f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:29.859995+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 35012608 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 11.398914337s of 11.508928299s, submitted: 29
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b54cd2c00 session 0x560b54ee8960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b58e1ac00 session 0x560b55cce5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b604c000/0x0/0x1bfc00000, data 0x3fa8332/0x40a2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:30.860189+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1580085 data_alloc: 301989888 data_used: 17211392
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 35012608 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:31.860425+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137338880 unmapped: 35012608 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 heartbeat osd_stat(store_statfs(0x1b604c000/0x0/0x1bfc00000, data 0x3fa8332/0x40a2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x5b0f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1bc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:32.860614+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74f800 session 0x560b58db32c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b58e1bc00 session 0x560b54ee94a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 137355264 unmapped: 34996224 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b54cd2c00 session 0x560b573d4960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b58e1ac00 session 0x560b551f01e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:33.860815+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74e000 session 0x560b573d5860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74f800 session 0x560b5734af00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141459456 unmapped: 30892032 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b5b74e400 session 0x560b5befef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 ms_handle_reset con 0x560b54cd2c00 session 0x560b57c9b4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:34.860988+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141492224 unmapped: 30859264 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:35.861110+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 ms_handle_reset con 0x560b58e1ac00 session 0x560b59a6f4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 ms_handle_reset con 0x560b5b74f800 session 0x560b551f1c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1806262 data_alloc: 301989888 data_used: 17223680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140550144 unmapped: 31801344 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 ms_handle_reset con 0x560b57765c00 session 0x560b59cbe5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:36.861294+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808fc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 ms_handle_reset con 0x560b5808fc00 session 0x560b59cbfa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140550144 unmapped: 31801344 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:37.861489+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 heartbeat osd_stat(store_statfs(0x1b453c000/0x0/0x1bfc00000, data 0x4911dab/0x4a11000, compress 0x0/0x0/0x0, omap 0x647, meta 0x6caf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140550144 unmapped: 31801344 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:38.861818+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140558336 unmapped: 31793152 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 ms_handle_reset con 0x560b54cd2c00 session 0x560b59cbfc20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:39.862026+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140574720 unmapped: 31776768 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 ms_handle_reset con 0x560b57765c00 session 0x560b59cbf860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.286872864s of 10.002053261s, submitted: 187
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 heartbeat osd_stat(store_statfs(0x1b4539000/0x0/0x1bfc00000, data 0x491430e/0x4a14000, compress 0x0/0x0/0x0, omap 0x647, meta 0x6caf9b9), peers [0,1,2,4,5] op hist [0,0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 ms_handle_reset con 0x560b58e1ac00 session 0x560b553c41e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:40.862199+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1684615 data_alloc: 301989888 data_used: 17240064
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140591104 unmapped: 31760384 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 heartbeat osd_stat(store_statfs(0x1b4539000/0x0/0x1bfc00000, data 0x491430e/0x4a14000, compress 0x0/0x0/0x0, omap 0x647, meta 0x6caf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 ms_handle_reset con 0x560b5b74f800 session 0x560b553c4000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808f000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 ms_handle_reset con 0x560b5808f000 session 0x560b553c45a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:41.862296+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140591104 unmapped: 31760384 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:42.862477+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140591104 unmapped: 31760384 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:43.862620+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 ms_handle_reset con 0x560b54cd2c00 session 0x560b571e5860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140632064 unmapped: 31719424 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:44.862838+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140648448 unmapped: 31703040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 ms_handle_reset con 0x560b57765c00 session 0x560b553c4960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:45.862958+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1691522 data_alloc: 301989888 data_used: 17244160
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 ms_handle_reset con 0x560b58e1ac00 session 0x560b553c4d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140648448 unmapped: 31703040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 ms_handle_reset con 0x560b5b74f800 session 0x560b553c4f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:46.863098+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140648448 unmapped: 31703040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 heartbeat osd_stat(store_statfs(0x1b4132000/0x0/0x1bfc00000, data 0x4918d52/0x4a1c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:47.863223+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140648448 unmapped: 31703040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:48.863320+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140648448 unmapped: 31703040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:49.863535+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140648448 unmapped: 31703040 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.744586945s of 10.173274040s, submitted: 116
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:50.863675+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 heartbeat osd_stat(store_statfs(0x1b4130000/0x0/0x1bfc00000, data 0x4918dc5/0x4a1e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1696475 data_alloc: 301989888 data_used: 17240064
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808e400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 ms_handle_reset con 0x560b5808e400 session 0x560b553c52c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140664832 unmapped: 31686656 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 heartbeat osd_stat(store_statfs(0x1b412e000/0x0/0x1bfc00000, data 0x4918e15/0x4a1f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:51.863831+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 ms_handle_reset con 0x560b54cd2c00 session 0x560b553c54a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140664832 unmapped: 31686656 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 162 ms_handle_reset con 0x560b57765c00 session 0x560b553c5680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:52.863995+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 162 ms_handle_reset con 0x560b58e1ac00 session 0x560b553c5c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140738560 unmapped: 31612928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:53.864149+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 162 heartbeat osd_stat(store_statfs(0x1b412d000/0x0/0x1bfc00000, data 0x491b16b/0x4a20000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 162 ms_handle_reset con 0x560b5b74f800 session 0x560b59cbfc20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140738560 unmapped: 31612928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:54.864300+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57e48c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140746752 unmapped: 31604736 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b57e48c00 session 0x560b59cbfa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:55.864513+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b54cd2c00 session 0x560b59cbe5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1716104 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140763136 unmapped: 31588352 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:56.864660+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140779520 unmapped: 31571968 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b57765c00 session 0x560b551f1c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:57.864891+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b4124000/0x0/0x1bfc00000, data 0x491d793/0x4a29000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140787712 unmapped: 31563776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b58e1ac00 session 0x560b59a6f4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:58.865026+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b5b74f800 session 0x560b57264b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140787712 unmapped: 31563776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b4128000/0x0/0x1bfc00000, data 0x491d711/0x4a26000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afc800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:59.865214+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b55afc800 session 0x560b5befef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140861440 unmapped: 31490048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:00.865378+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1716450 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140861440 unmapped: 31490048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:01.865547+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b54cd2c00 session 0x560b5734af00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140861440 unmapped: 31490048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b57765c00 session 0x560b573d5860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:02.865695+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b4122000/0x0/0x1bfc00000, data 0x4922ada/0x4a2c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140861440 unmapped: 31490048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 12.517396927s of 12.948227882s, submitted: 133
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:03.865832+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140869632 unmapped: 31481856 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:04.865984+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 140877824 unmapped: 31473664 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:05.866131+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1720389 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 142024704 unmapped: 30326784 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74f800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:06.866517+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b58e1ac00 session 0x560b573d4960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141819904 unmapped: 30531584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:07.866678+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b3b27000/0x0/0x1bfc00000, data 0x4f1bc0c/0x5027000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141819904 unmapped: 30531584 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:08.866853+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b3b27000/0x0/0x1bfc00000, data 0x4f1bc0c/0x5027000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141844480 unmapped: 30507008 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:09.867052+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141860864 unmapped: 30490624 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:10.867196+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b3b1d000/0x0/0x1bfc00000, data 0x4f21dc2/0x5030000, compress 0x0/0x0/0x0, omap 0x647, meta 0x70af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1780445 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141877248 unmapped: 30474240 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b4af7000/0x0/0x1bfc00000, data 0x4f265c2/0x5033000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:11.867328+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a714000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 ms_handle_reset con 0x560b5a714000 session 0x560b55cce5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141885440 unmapped: 30466048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:12.867496+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141893632 unmapped: 30457856 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:13.867641+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.763146400s of 10.314648628s, submitted: 132
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141893632 unmapped: 30457856 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:14.867795+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141893632 unmapped: 30457856 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:15.867965+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50e4000/0x0/0x1bfc00000, data 0x4941daf/0x4a4a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1727607 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141918208 unmapped: 30433280 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:16.868126+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141926400 unmapped: 30425088 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50dc000/0x0/0x1bfc00000, data 0x4949722/0x4a52000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:17.868294+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:18.868434+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:19.868610+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:20.868809+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1726535 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:21.868907+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50da000/0x0/0x1bfc00000, data 0x494b85c/0x4a54000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:22.869005+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:23.869153+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:24.869275+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 11.511870384s of 11.563188553s, submitted: 17
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50d9000/0x0/0x1bfc00000, data 0x494c8bd/0x4a55000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:25.869410+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1726733 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:26.869504+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:27.869659+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:28.869836+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:29.869981+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50d5000/0x0/0x1bfc00000, data 0x4950996/0x4a59000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:30.870110+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50d4000/0x0/0x1bfc00000, data 0x4951beb/0x4a5a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1727937 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:31.870250+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:32.870376+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:33.870552+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 141934592 unmapped: 30416896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:34.870712+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.949091911s of 10.000473976s, submitted: 12
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 142983168 unmapped: 29368320 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:35.870866+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1731311 data_alloc: 301989888 data_used: 17264640
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 144080896 unmapped: 28270592 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:36.871028+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 heartbeat osd_stat(store_statfs(0x1b50c8000/0x0/0x1bfc00000, data 0x495ce36/0x4a66000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143958016 unmapped: 28393472 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:37.871224+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143958016 unmapped: 28393472 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:38.871326+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143958016 unmapped: 28393472 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:39.871499+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143966208 unmapped: 28385280 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:40.871674+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c019000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 ms_handle_reset con 0x560b5c019000 session 0x560b555863c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1742025 data_alloc: 301989888 data_used: 17276928
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143982592 unmapped: 28368896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:41.871828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 heartbeat osd_stat(store_statfs(0x1b50b1000/0x0/0x1bfc00000, data 0x496e6f2/0x4a7c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143982592 unmapped: 28368896 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:42.871971+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.2 total, 600.0 interval
                                                          Cumulative writes: 13K writes, 49K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s
                                                          Cumulative WAL: 13K writes, 4123 syncs, 3.23 writes per sync, written: 0.04 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 7636 writes, 24K keys, 7636 commit groups, 1.0 writes per commit group, ingest: 22.35 MB, 0.04 MB/s
                                                          Interval WAL: 7636 writes, 3272 syncs, 2.33 writes per sync, written: 0.02 GB, 0.04 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 heartbeat osd_stat(store_statfs(0x1b50ac000/0x0/0x1bfc00000, data 0x4973f0c/0x4a81000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [0,0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 165 ms_handle_reset con 0x560b54cd2c00 session 0x560b54ee81e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143974400 unmapped: 28377088 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:43.872141+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 143974400 unmapped: 28377088 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:44.872259+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.687820435s of 10.004899025s, submitted: 91
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 166 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 144023552 unmapped: 28327936 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 166 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 166 ms_handle_reset con 0x560b57765c00 session 0x560b54ee9680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:45.872398+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 166 heartbeat osd_stat(store_statfs(0x1b5096000/0x0/0x1bfc00000, data 0x4985ad6/0x4a96000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 166 ms_handle_reset con 0x560b58e1ac00 session 0x560b59a6fa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1748549 data_alloc: 301989888 data_used: 17281024
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a714000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 144039936 unmapped: 28311552 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:46.872566+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 167 ms_handle_reset con 0x560b5a714000 session 0x560b59a6e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 144048128 unmapped: 28303360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:47.872703+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 167 heartbeat osd_stat(store_statfs(0x1b5092000/0x0/0x1bfc00000, data 0x4988095/0x4a9a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 144048128 unmapped: 28303360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:48.872854+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 168 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 168 ms_handle_reset con 0x560b5a4a2400 session 0x560b57265e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 144080896 unmapped: 28270592 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:49.873041+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 168 ms_handle_reset con 0x560b54cd2c00 session 0x560b5c4fc000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 169 ms_handle_reset con 0x560b57765c00 session 0x560b5c4fc1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145203200 unmapped: 27148288 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:50.873219+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 170 ms_handle_reset con 0x560b58e1ac00 session 0x560b5c4fc5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1764964 data_alloc: 301989888 data_used: 17305600
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145244160 unmapped: 27107328 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:51.873349+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 170 heartbeat osd_stat(store_statfs(0x1b506d000/0x0/0x1bfc00000, data 0x49a5286/0x4abd000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 171 ms_handle_reset con 0x560b5a4a2000 session 0x560b5c4fc3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a3400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145276928 unmapped: 27074560 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:52.873520+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 172 ms_handle_reset con 0x560b5a4a3400 session 0x560b5c4fc960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145334272 unmapped: 27017216 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:53.873682+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145334272 unmapped: 27017216 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:54.873873+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.348553658s of 10.002063751s, submitted: 206
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 173 ms_handle_reset con 0x560b54cd2c00 session 0x560b5c4fcd20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145350656 unmapped: 27000832 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:55.874021+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 173 ms_handle_reset con 0x560b57765c00 session 0x560b5c4fd0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1776649 data_alloc: 301989888 data_used: 17317888
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145358848 unmapped: 26992640 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 173 heartbeat osd_stat(store_statfs(0x1b5046000/0x0/0x1bfc00000, data 0x49c8ecd/0x4ae7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:56.874179+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 174 ms_handle_reset con 0x560b58e1ac00 session 0x560b5c4fd2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145391616 unmapped: 26959872 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:57.874301+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 145473536 unmapped: 26877952 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:58.874448+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 174 ms_handle_reset con 0x560b5a4a2000 session 0x560b5c4fd680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5523ec00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 175 ms_handle_reset con 0x560b5523ec00 session 0x560b5c4fd860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146579456 unmapped: 25772032 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:59.874616+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 175 ms_handle_reset con 0x560b54cd2c00 session 0x560b5c504000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146604032 unmapped: 25747456 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:00.874815+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1778649 data_alloc: 301989888 data_used: 17326080
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146604032 unmapped: 25747456 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:01.874999+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 175 heartbeat osd_stat(store_statfs(0x1b5034000/0x0/0x1bfc00000, data 0x49daf1b/0x4af9000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146546688 unmapped: 25804800 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:02.875175+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57765c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146563072 unmapped: 25788416 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:03.875326+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146563072 unmapped: 25788416 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:04.875500+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.431014061s of 10.000574112s, submitted: 182
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 49
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146841600 unmapped: 25509888 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:05.875658+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1791603 data_alloc: 301989888 data_used: 17330176
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146866176 unmapped: 25485312 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:06.875846+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146866176 unmapped: 25485312 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:07.876000+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 177 heartbeat osd_stat(store_statfs(0x1b4fe9000/0x0/0x1bfc00000, data 0x4a1d334/0x4b44000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146874368 unmapped: 25477120 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:08.876143+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58dd5000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 177 ms_handle_reset con 0x560b58dd5000 session 0x560b54ee9e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146923520 unmapped: 25427968 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:09.876335+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146931712 unmapped: 25419776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:10.876485+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1801082 data_alloc: 301989888 data_used: 17350656
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146989056 unmapped: 25362432 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:11.876609+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146989056 unmapped: 25362432 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:12.876730+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 178 heartbeat osd_stat(store_statfs(0x1b4fd4000/0x0/0x1bfc00000, data 0x4a35133/0x4b59000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 146989056 unmapped: 25362432 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:13.876843+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147005440 unmapped: 25346048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:14.877028+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147005440 unmapped: 25346048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:15.877186+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.341442108s of 10.744112015s, submitted: 132
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1801520 data_alloc: 301989888 data_used: 17354752
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147005440 unmapped: 25346048 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 178 ms_handle_reset con 0x560b576ff800 session 0x560b59a6f0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:16.877305+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147013632 unmapped: 25337856 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:17.877464+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4fba000/0x0/0x1bfc00000, data 0x4a4cd15/0x4b74000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b56f4e000 session 0x560b5c504000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b57700800 session 0x560b573d43c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147021824 unmapped: 25329664 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:18.877601+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147079168 unmapped: 25272320 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:19.877832+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147087360 unmapped: 25264128 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4fae000/0x0/0x1bfc00000, data 0x4a5903b/0x4b80000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:20.878211+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1804904 data_alloc: 301989888 data_used: 17367040
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147087360 unmapped: 25264128 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:21.878368+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4fad000/0x0/0x1bfc00000, data 0x4a5b2c8/0x4b81000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147087360 unmapped: 25264128 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:22.878572+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4fad000/0x0/0x1bfc00000, data 0x4a5b2c8/0x4b81000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147087360 unmapped: 25264128 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:23.878738+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 147095552 unmapped: 25255936 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:24.879192+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4fa3000/0x0/0x1bfc00000, data 0x4a64efc/0x4b8b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148144128 unmapped: 24207360 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:25.879826+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.196972847s of 10.390867233s, submitted: 54
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b57700800 session 0x560b571d0780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1808280 data_alloc: 301989888 data_used: 17367040
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148176896 unmapped: 24174592 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:26.879927+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f9f000/0x0/0x1bfc00000, data 0x4a6a7be/0x4b8f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148176896 unmapped: 24174592 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:27.880264+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b54cd2c00 session 0x560b571d1c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b56f4e000 session 0x560b57402b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148226048 unmapped: 24125440 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:28.880469+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148226048 unmapped: 24125440 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:29.880732+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f9f000/0x0/0x1bfc00000, data 0x4a6a7be/0x4b8f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148226048 unmapped: 24125440 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:30.880857+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57e48800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b57e48800 session 0x560b54ee6d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1808575 data_alloc: 301989888 data_used: 17367040
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148234240 unmapped: 24117248 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:31.881044+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b576ff000 session 0x560b54ee61e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b576ff000 session 0x560b5befe780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f9c000/0x0/0x1bfc00000, data 0x4a6ad30/0x4b92000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:32.881186+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148258816 unmapped: 24092672 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57e48800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b57e48800 session 0x560b59cbe3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:33.881346+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148258816 unmapped: 24092672 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:34.881560+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148267008 unmapped: 24084480 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:35.881815+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148299776 unmapped: 24051712 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1823437 data_alloc: 301989888 data_used: 17367040
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.058636665s of 10.236333847s, submitted: 33
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:36.883896+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 148299776 unmapped: 24051712 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f96000/0x0/0x1bfc00000, data 0x4a6c3ba/0x4b98000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f99000/0x0/0x1bfc00000, data 0x4a6c2e6/0x4b95000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b54cd2c00 session 0x560b59cbe960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:37.884017+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149430272 unmapped: 22921216 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b56f4e000 session 0x560b55ccdc20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b576ff800 session 0x560b5c4fd2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b57700800 session 0x560b5c505860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:38.884149+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22855680 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b54cd2c00 session 0x560b5c5050e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b56f4e000 session 0x560b5c5054a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:39.884412+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149520384 unmapped: 22831104 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:40.884598+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149520384 unmapped: 22831104 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f94000/0x0/0x1bfc00000, data 0x4a73e33/0x4b9a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1819012 data_alloc: 301989888 data_used: 17367040
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:41.884743+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149528576 unmapped: 22822912 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:42.884841+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149528576 unmapped: 22822912 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:43.885014+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149528576 unmapped: 22822912 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:44.885205+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149528576 unmapped: 22822912 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 heartbeat osd_stat(store_statfs(0x1b4f92000/0x0/0x1bfc00000, data 0x4a764bf/0x4b9c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:45.885362+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149553152 unmapped: 22798336 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1823152 data_alloc: 301989888 data_used: 17367040
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:46.885558+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.536793709s of 10.035903931s, submitted: 123
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149561344 unmapped: 22790144 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:47.885829+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149561344 unmapped: 22790144 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:48.886037+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149569536 unmapped: 22781952 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:49.886206+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149569536 unmapped: 22781952 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 ms_handle_reset con 0x560b576ff000 session 0x560b57c9b860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:50.886352+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 heartbeat osd_stat(store_statfs(0x1b4f71000/0x0/0x1bfc00000, data 0x4a96f2a/0x4bbd000, compress 0x0/0x0/0x0, omap 0x647, meta 0x60cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149602304 unmapped: 22749184 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1829385 data_alloc: 301989888 data_used: 17379328
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:51.886520+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149602304 unmapped: 22749184 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 ms_handle_reset con 0x560b576ff800 session 0x560b57171860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:52.886676+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 ms_handle_reset con 0x560b576ff800 session 0x560b59ad2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149618688 unmapped: 22732800 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:53.886885+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 ms_handle_reset con 0x560b54cd2c00 session 0x560b59ad21e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149618688 unmapped: 22732800 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:54.887163+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149618688 unmapped: 22732800 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 ms_handle_reset con 0x560b56f4e000 session 0x560b59ad23c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 ms_handle_reset con 0x560b576ff000 session 0x560b59ad2960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:55.887535+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149659648 unmapped: 22691840 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 heartbeat osd_stat(store_statfs(0x1b4b6d000/0x0/0x1bfc00000, data 0x4a9948d/0x4bc1000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1829608 data_alloc: 301989888 data_used: 17379328
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:56.887826+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149659648 unmapped: 22691840 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.043324471s of 10.442698479s, submitted: 109
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:57.888058+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149684224 unmapped: 22667264 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 ms_handle_reset con 0x560b57700800 session 0x560b59ad3e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 heartbeat osd_stat(store_statfs(0x1b4b5e000/0x0/0x1bfc00000, data 0x4aa6047/0x4bd0000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:58.888264+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149692416 unmapped: 22659072 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:59.888491+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149692416 unmapped: 22659072 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 ms_handle_reset con 0x560b57700800 session 0x560b59ae2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 ms_handle_reset con 0x560b54cd2c00 session 0x560b59ae23c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:00.888648+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149725184 unmapped: 22626304 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1835831 data_alloc: 301989888 data_used: 17391616
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:01.888807+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149725184 unmapped: 22626304 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 heartbeat osd_stat(store_statfs(0x1b4b54000/0x0/0x1bfc00000, data 0x4aaef0b/0x4bda000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:02.888943+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149733376 unmapped: 22618112 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:03.889153+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149757952 unmapped: 22593536 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:04.889343+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 heartbeat osd_stat(store_statfs(0x1b4b48000/0x0/0x1bfc00000, data 0x4abb2fc/0x4be5000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149757952 unmapped: 22593536 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:05.889523+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149757952 unmapped: 22593536 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 182 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 183 ms_handle_reset con 0x560b56f4e000 session 0x560b59ae3860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1856154 data_alloc: 301989888 data_used: 17403904
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:06.889719+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149798912 unmapped: 22552576 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.798701286s of 10.199620247s, submitted: 120
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 184 ms_handle_reset con 0x560b576ff800 session 0x560b59cbed20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57e48800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:07.889935+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149807104 unmapped: 22544384 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 184 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 185 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 185 ms_handle_reset con 0x560b57e48800 session 0x560b59a6fa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 185 handle_osd_map epochs [183,185], i have 185, src has [1,185]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 185 ms_handle_reset con 0x560b576ff000 session 0x560b59ae3a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:08.890102+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149815296 unmapped: 22536192 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:09.890338+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 149815296 unmapped: 22536192 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 185 heartbeat osd_stat(store_statfs(0x1b4b16000/0x0/0x1bfc00000, data 0x4ae065f/0x4c16000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 186 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:10.890500+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 186 ms_handle_reset con 0x560b54cd2c00 session 0x560b57c9b4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 150929408 unmapped: 21422080 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 186 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1865927 data_alloc: 301989888 data_used: 17416192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:11.893645+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 150953984 unmapped: 21397504 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 186 ms_handle_reset con 0x560b57765c00 session 0x560b5c4fde00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 187 ms_handle_reset con 0x560b56f4e000 session 0x560b573d5e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:12.893825+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 151986176 unmapped: 20365312 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 187 heartbeat osd_stat(store_statfs(0x1b4b08000/0x0/0x1bfc00000, data 0x4af21a4/0x4c25000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 50
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:13.893975+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 152002560 unmapped: 20348928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:14.894171+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 152002560 unmapped: 20348928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:15.894362+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 152002560 unmapped: 20348928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1868003 data_alloc: 301989888 data_used: 17416192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:16.894519+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 152002560 unmapped: 20348928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.562884331s of 10.000224113s, submitted: 441
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 188 handle_osd_map epochs [188,188], i have 188, src has [1,188]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 188 handle_osd_map epochs [188,188], i have 188, src has [1,188]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:17.894717+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 152002560 unmapped: 20348928 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 189 heartbeat osd_stat(store_statfs(0x1b4ad7000/0x0/0x1bfc00000, data 0x4b22340/0x4c56000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:18.894891+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153075712 unmapped: 19275776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:19.895070+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153075712 unmapped: 19275776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:20.895256+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153075712 unmapped: 19275776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1879475 data_alloc: 301989888 data_used: 17428480
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:21.895449+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153075712 unmapped: 19275776 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 heartbeat osd_stat(store_statfs(0x1b4ac7000/0x0/0x1bfc00000, data 0x4b2fce1/0x4c67000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 ms_handle_reset con 0x560b576ff800 session 0x560b59cbef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:22.895576+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153092096 unmapped: 19259392 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 ms_handle_reset con 0x560b54cd2c00 session 0x560b55586000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:23.895716+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153092096 unmapped: 19259392 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 ms_handle_reset con 0x560b56f4e000 session 0x560b59ad3c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:24.895909+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153133056 unmapped: 19218432 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 ms_handle_reset con 0x560b576ff800 session 0x560b59ae3c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:25.896048+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153133056 unmapped: 19218432 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 heartbeat osd_stat(store_statfs(0x1b4aa1000/0x0/0x1bfc00000, data 0x4b50645/0x4c8d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2003444 data_alloc: 301989888 data_used: 17440768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:26.896187+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161570816 unmapped: 10780672 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 heartbeat osd_stat(store_statfs(0x1b3aa1000/0x0/0x1bfc00000, data 0x5b50647/0x5c8d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [0,0,0,0,0,0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.433677673s of 10.011911392s, submitted: 148
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:27.896341+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153223168 unmapped: 19128320 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:28.896489+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 191 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 153337856 unmapped: 19013632 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:29.896670+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 191 heartbeat osd_stat(store_statfs(0x1b2283000/0x0/0x1bfc00000, data 0x736c4cb/0x74ab000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162865152 unmapped: 9486336 heap: 172351488 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:30.896859+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 154673152 unmapped: 26075136 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:31.896981+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2503360 data_alloc: 301989888 data_used: 17465344
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 154714112 unmapped: 26034176 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:32.897087+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 154828800 unmapped: 25919488 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:33.897284+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163340288 unmapped: 17408000 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:34.897441+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163454976 unmapped: 17293312 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 192 heartbeat osd_stat(store_statfs(0x1aba4a000/0x0/0x1bfc00000, data 0xdba54d3/0xdce4000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:35.897607+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163676160 unmapped: 17072128 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:36.897843+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3081838 data_alloc: 301989888 data_used: 17465344
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 155377664 unmapped: 25370624 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.131910324s of 10.278173447s, submitted: 152
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:37.898035+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 155467776 unmapped: 25280512 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:38.898143+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 155648000 unmapped: 25100288 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:39.898331+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 164102144 unmapped: 16646144 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 heartbeat osd_stat(store_statfs(0x1a8203000/0x0/0x1bfc00000, data 0x113ea328/0x1152a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:40.898497+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 heartbeat osd_stat(store_statfs(0x1a7201000/0x0/0x1bfc00000, data 0x123ecdc2/0x1252d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 155877376 unmapped: 24870912 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:41.898615+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3499856 data_alloc: 301989888 data_used: 17477632
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 165502976 unmapped: 15245312 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:42.898762+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 157171712 unmapped: 23576576 heap: 180748288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:43.898970+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 157220864 unmapped: 31924224 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:44.899123+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 157302784 unmapped: 31842304 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 heartbeat osd_stat(store_statfs(0x1a41e9000/0x0/0x1bfc00000, data 0x15403fa7/0x15545000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:45.899338+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 165765120 unmapped: 23379968 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:46.899491+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3963028 data_alloc: 301989888 data_used: 17477632
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 165928960 unmapped: 23216128 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.979548454s of 10.042301178s, submitted: 83
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:47.899667+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 166109184 unmapped: 23035904 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 heartbeat osd_stat(store_statfs(0x1a09ca000/0x0/0x1bfc00000, data 0x18c2291c/0x18d64000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:48.899858+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 157900800 unmapped: 31244288 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:49.900045+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 158040064 unmapped: 31105024 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:50.900190+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 158146560 unmapped: 30998528 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 51
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:51.900345+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4560442 data_alloc: 301989888 data_used: 17477632
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 158441472 unmapped: 30703616 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:52.900503+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159539200 unmapped: 29605888 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:53.900641+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159596544 unmapped: 29548544 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 heartbeat osd_stat(store_statfs(0x19b973000/0x0/0x1bfc00000, data 0x1dc7777e/0x1ddbb000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:54.900879+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159596544 unmapped: 29548544 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:55.901052+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168206336 unmapped: 20938752 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:56.901204+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4838162 data_alloc: 301989888 data_used: 17477632
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159875072 unmapped: 29270016 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.934077263s of 10.002331734s, submitted: 103
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 ms_handle_reset con 0x560b57700800 session 0x560b55b4e780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:57.901348+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159940608 unmapped: 29204480 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 heartbeat osd_stat(store_statfs(0x19993d000/0x0/0x1bfc00000, data 0x1fcac261/0x1fdf1000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:58.901475+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 160129024 unmapped: 29016064 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58dd5000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 ms_handle_reset con 0x560b58dd5000 session 0x560b59ae25a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:59.901739+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 ms_handle_reset con 0x560b56f4e000 session 0x560b5beff2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168747008 unmapped: 20398080 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 ms_handle_reset con 0x560b54cd2c00 session 0x560b5befeb40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 194 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 194 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 194 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:00.901873+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 194 ms_handle_reset con 0x560b57700800 session 0x560b59ae32c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 160505856 unmapped: 28639232 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:01.902124+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5297539 data_alloc: 301989888 data_used: 17506304
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 heartbeat osd_stat(store_statfs(0x196111000/0x0/0x1bfc00000, data 0x234ce9c5/0x2361c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 ms_handle_reset con 0x560b5b57c000 session 0x560b573d5a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 160612352 unmapped: 28532736 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 heartbeat osd_stat(store_statfs(0x196111000/0x0/0x1bfc00000, data 0x234ce9c5/0x2361c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:02.902377+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 160604160 unmapped: 28540928 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:03.902566+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 160612352 unmapped: 28532736 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 heartbeat osd_stat(store_statfs(0x1958fc000/0x0/0x1bfc00000, data 0x23ce4d35/0x23e31000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:04.902839+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57cc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162816000 unmapped: 26329088 heap: 189145088 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 ms_handle_reset con 0x560b5b57cc00 session 0x560b57264f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:05.902982+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 ms_handle_reset con 0x560b54cd2c00 session 0x560b59ae3a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162611200 unmapped: 34930688 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:06.903120+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5590114 data_alloc: 301989888 data_used: 17506304
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162701312 unmapped: 34840576 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 ms_handle_reset con 0x560b5b57c000 session 0x560b56d1a5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:07.903375+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 196 ms_handle_reset con 0x560b56f4e000 session 0x560b59a6fa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.338501930s of 10.066807747s, submitted: 263
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162734080 unmapped: 34807808 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57d000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 196 ms_handle_reset con 0x560b5b57d000 session 0x560b5c5050e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 196 handle_osd_map epochs [197,197], i have 197, src has [1,197]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 197 ms_handle_reset con 0x560b576ff000 session 0x560b55bf2780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:08.903558+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 197 ms_handle_reset con 0x560b57700800 session 0x560b59ae3860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 197 heartbeat osd_stat(store_statfs(0x1940c2000/0x0/0x1bfc00000, data 0x25518eae/0x25669000, compress 0x0/0x0/0x0, omap 0x647, meta 0x64cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 197 ms_handle_reset con 0x560b54cd2c00 session 0x560b5c505860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161898496 unmapped: 35643392 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:09.903805+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161914880 unmapped: 35627008 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:10.903937+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161980416 unmapped: 35561472 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 198 ms_handle_reset con 0x560b56f4e000 session 0x560b5c4fd2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 198 ms_handle_reset con 0x560b5b57c000 session 0x560b54ee6d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:11.904098+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2063646 data_alloc: 301989888 data_used: 17530880
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159793152 unmapped: 37748736 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:12.904248+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159834112 unmapped: 37707776 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57d000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:13.904398+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159834112 unmapped: 37707776 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 200 ms_handle_reset con 0x560b5b57d000 session 0x560b57402b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:14.904567+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 200 heartbeat osd_stat(store_statfs(0x1b4499000/0x0/0x1bfc00000, data 0x4d49356/0x4e94000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [0,0,0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 200 ms_handle_reset con 0x560b54cd2c00 session 0x560b571d1c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 159965184 unmapped: 37576704 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:15.904747+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161030144 unmapped: 36511744 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 201 ms_handle_reset con 0x560b56f4e000 session 0x560b571d0780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:16.904935+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2072575 data_alloc: 301989888 data_used: 17543168
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161030144 unmapped: 36511744 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:17.905071+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.981177330s of 10.320007324s, submitted: 471
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 202 ms_handle_reset con 0x560b5b57c000 session 0x560b5a4e7a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161054720 unmapped: 36487168 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 ms_handle_reset con 0x560b57700800 session 0x560b56ceb860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 ms_handle_reset con 0x560b5b57c800 session 0x560b54ee9e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:18.905297+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 heartbeat osd_stat(store_statfs(0x1b445c000/0x0/0x1bfc00000, data 0x4d7a379/0x4ed0000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 ms_handle_reset con 0x560b56f4e000 session 0x560b571d1860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161185792 unmapped: 36356096 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 203 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 204 heartbeat osd_stat(store_statfs(0x1b445c000/0x0/0x1bfc00000, data 0x4d7a379/0x4ed0000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 204 ms_handle_reset con 0x560b54cd2c00 session 0x560b57081a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:19.905474+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 204 ms_handle_reset con 0x560b57700800 session 0x560b5734a780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162250752 unmapped: 35291136 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 205 ms_handle_reset con 0x560b5b57c000 session 0x560b571703c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:20.905663+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162242560 unmapped: 35299328 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:21.905825+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 206 ms_handle_reset con 0x560b5c3dd800 session 0x560b57c9b2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 206 ms_handle_reset con 0x560b5b57c400 session 0x560b571e5c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2114784 data_alloc: 301989888 data_used: 17575936
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162267136 unmapped: 35274752 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:22.905959+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 207 ms_handle_reset con 0x560b5c3dcc00 session 0x560b55587c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162299904 unmapped: 35241984 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 207 ms_handle_reset con 0x560b54cd2c00 session 0x560b54ee6f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:23.906113+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 207 ms_handle_reset con 0x560b56f4e000 session 0x560b57400960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162349056 unmapped: 35192832 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b57700800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 ms_handle_reset con 0x560b57700800 session 0x560b580390e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:24.906249+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 heartbeat osd_stat(store_statfs(0x1b443b000/0x0/0x1bfc00000, data 0x4d95ca8/0x4ef2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 ms_handle_reset con 0x560b54cd2c00 session 0x560b57401c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161841152 unmapped: 35700736 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:25.906396+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161849344 unmapped: 35692544 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:26.906518+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 ms_handle_reset con 0x560b56f4e000 session 0x560b57180780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2109434 data_alloc: 301989888 data_used: 17571840
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161890304 unmapped: 35651584 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 ms_handle_reset con 0x560b5b57c400 session 0x560b571701e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:27.906681+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161939456 unmapped: 35602432 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:28.906833+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161939456 unmapped: 35602432 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:29.907045+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 209 heartbeat osd_stat(store_statfs(0x1b441d000/0x0/0x1bfc00000, data 0x4db97df/0x4f10000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161939456 unmapped: 35602432 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 11.335001945s of 12.224678993s, submitted: 326
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:30.907231+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161964032 unmapped: 35577856 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 209 ms_handle_reset con 0x560b5c3dcc00 session 0x560b551f0b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:31.907385+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2112126 data_alloc: 301989888 data_used: 17584128
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161980416 unmapped: 35561472 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:32.907544+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161988608 unmapped: 35553280 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 211 heartbeat osd_stat(store_statfs(0x1b43fe000/0x0/0x1bfc00000, data 0x4dd54bd/0x4f2f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:33.907691+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161996800 unmapped: 35545088 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:34.907842+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 161554432 unmapped: 35987456 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 211 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:35.907968+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 212 heartbeat osd_stat(store_statfs(0x1b43ef000/0x0/0x1bfc00000, data 0x4dde6c0/0x4f3e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dd800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 212 ms_handle_reset con 0x560b5c3dd800 session 0x560b5c4fd4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162611200 unmapped: 34930688 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:36.908098+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 213 ms_handle_reset con 0x560b54cd2c00 session 0x560b5c5054a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2132681 data_alloc: 301989888 data_used: 17608704
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162717696 unmapped: 34824192 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:37.908245+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 214 ms_handle_reset con 0x560b56f4e000 session 0x560b59ad30e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162750464 unmapped: 34791424 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:38.908403+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 214 heartbeat osd_stat(store_statfs(0x1b43d5000/0x0/0x1bfc00000, data 0x4df6daa/0x4f58000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162750464 unmapped: 34791424 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:39.908542+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162750464 unmapped: 34791424 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.638002396s of 10.000406265s, submitted: 115
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:40.908647+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162750464 unmapped: 34791424 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:41.908820+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2139976 data_alloc: 301989888 data_used: 17620992
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162758656 unmapped: 34783232 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:42.908986+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 ms_handle_reset con 0x560b5b57c400 session 0x560b59cbf4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162824192 unmapped: 34717696 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:43.909140+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 ms_handle_reset con 0x560b5c3dcc00 session 0x560b561a61e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162914304 unmapped: 34627584 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dd000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 ms_handle_reset con 0x560b5c3dd000 session 0x560b573d43c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 ms_handle_reset con 0x560b54cd2c00 session 0x560b561a65a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 heartbeat osd_stat(store_statfs(0x1b43a4000/0x0/0x1bfc00000, data 0x4e1ea83/0x4f88000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:44.909296+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162930688 unmapped: 34611200 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:45.909480+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 162930688 unmapped: 34611200 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:46.909663+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2154334 data_alloc: 301989888 data_used: 17620992
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163028992 unmapped: 34512896 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:47.909867+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163028992 unmapped: 34512896 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 ms_handle_reset con 0x560b56f4e000 session 0x560b5c505c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:48.910075+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163028992 unmapped: 34512896 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 ms_handle_reset con 0x560b5c3dcc00 session 0x560b5c504f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 216 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:49.910296+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 217 ms_handle_reset con 0x560b5b57c400 session 0x560b5a4e6b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163176448 unmapped: 34365440 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.746376991s of 10.000884056s, submitted: 79
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 217 heartbeat osd_stat(store_statfs(0x1b436d000/0x0/0x1bfc00000, data 0x4e55594/0x4fc0000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:50.910420+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 ms_handle_reset con 0x560b5a4a2800 session 0x560b55bb4b40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 ms_handle_reset con 0x560b5a4a2000 session 0x560b59a6e1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163201024 unmapped: 34340864 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 heartbeat osd_stat(store_statfs(0x1b435c000/0x0/0x1bfc00000, data 0x4e63b08/0x4fd0000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:51.910559+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2169032 data_alloc: 301989888 data_used: 17633280
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 heartbeat osd_stat(store_statfs(0x1b435e000/0x0/0x1bfc00000, data 0x4e63a95/0x4fce000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171622400 unmapped: 25919488 heap: 197541888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:52.910710+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163520512 unmapped: 42426368 heap: 205946880 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 ms_handle_reset con 0x560b56f4e000 session 0x560b59cbe000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:53.910847+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171819008 unmapped: 34127872 heap: 205946880 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 ms_handle_reset con 0x560b5b57c400 session 0x560b59cbe1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:54.910985+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 heartbeat osd_stat(store_statfs(0x1aaf54000/0x0/0x1bfc00000, data 0xe270221/0xe3da000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163454976 unmapped: 42491904 heap: 205946880 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:55.911106+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 163487744 unmapped: 46661632 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:56.911257+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 heartbeat osd_stat(store_statfs(0x1a8f45000/0x0/0x1bfc00000, data 0x1027ff5b/0x103e9000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3700908 data_alloc: 301989888 data_used: 17633280
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176128000 unmapped: 34021376 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:57.911372+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168894464 unmapped: 41254912 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:58.911532+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 heartbeat osd_stat(store_statfs(0x1a2b2c000/0x0/0x1bfc00000, data 0x16695d15/0x16802000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [0,0,0,0,0,0,0,1,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 164716544 unmapped: 45432832 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:59.911707+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 7.755083561s of 10.005051613s, submitted: 268
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 173154304 unmapped: 36995072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:00.911828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5c3dcc00 session 0x560b571703c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 173162496 unmapped: 36986880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:01.911989+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 heartbeat osd_stat(store_statfs(0x19df15000/0x0/0x1bfc00000, data 0x1b2ac5e3/0x1b419000, compress 0x0/0x0/0x0, omap 0x647, meta 0x68cf9b9), peers [0,1,2,4,5] op hist [0,0,0,0,0,0,0,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5040408 data_alloc: 301989888 data_used: 17645568
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 173187072 unmapped: 36962304 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:02.912141+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fa800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169017344 unmapped: 41132032 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:03.912310+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 heartbeat osd_stat(store_statfs(0x195d72000/0x0/0x1bfc00000, data 0x222af49d/0x2241c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7a6f9b9), peers [0,1,2,4,5] op hist [0,0,0,0,0,0,0,1,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 164855808 unmapped: 45293568 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5b4fa800 session 0x560b571d1860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:04.912483+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b56f4e000 session 0x560b56d1a960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181755904 unmapped: 28393472 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 heartbeat osd_stat(store_statfs(0x193962000/0x0/0x1bfc00000, data 0x246bedae/0x2482c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7a6f9b9), peers [0,1,2,4,5] op hist [0,2,0,0,0,0,0,1,1])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:05.912652+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b54cd2c00 session 0x560b571e5e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5a4a2000 session 0x560b5c5045a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 165183488 unmapped: 44965888 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5a4a2800 session 0x560b5723ab40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5b57c400 session 0x560b54ee9e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:06.912828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b54cd2c00 session 0x560b5a4e7a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 heartbeat osd_stat(store_statfs(0x19054e000/0x0/0x1bfc00000, data 0x27ad28f4/0x27c3e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7a6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 6085292 data_alloc: 301989888 data_used: 17645568
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 165216256 unmapped: 44933120 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:07.912989+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b56f4e000 session 0x560b571d1c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5a4a2000 session 0x560b59ae3a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 166019072 unmapped: 44130304 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 ms_handle_reset con 0x560b5a4a2800 session 0x560b57264f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:08.913129+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 166027264 unmapped: 44122112 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 220 ms_handle_reset con 0x560b5c3dcc00 session 0x560b5beff2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:09.913266+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 7.459018230s of 10.001904488s, submitted: 312
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 166084608 unmapped: 44064768 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 221 heartbeat osd_stat(store_statfs(0x1b3128000/0x0/0x1bfc00000, data 0x4efb7bb/0x5065000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7a6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:10.913398+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 166207488 unmapped: 43941888 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 222 ms_handle_reset con 0x560b54cd2c00 session 0x560b55586000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:11.913599+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2346543 data_alloc: 301989888 data_used: 17657856
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 167690240 unmapped: 42459136 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:12.913861+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 222 ms_handle_reset con 0x560b56f4e000 session 0x560b5c4fd4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 167690240 unmapped: 42459136 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 52
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 223 ms_handle_reset con 0x560b5a4a2000 session 0x560b57401c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:13.914042+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 167698432 unmapped: 42450944 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 223 heartbeat osd_stat(store_statfs(0x1b2cef000/0x0/0x1bfc00000, data 0x4f2ee7b/0x509e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:14.914219+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 223 heartbeat osd_stat(store_statfs(0x1b2cee000/0x0/0x1bfc00000, data 0x4f2eedd/0x509f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 167723008 unmapped: 42426368 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:15.914376+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168771584 unmapped: 41377792 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:16.914601+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 223 heartbeat osd_stat(store_statfs(0x1b2cd4000/0x0/0x1bfc00000, data 0x4f49b33/0x50b9000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2354587 data_alloc: 301989888 data_used: 17657856
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168820736 unmapped: 41328640 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:17.914818+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 ms_handle_reset con 0x560b5a4a2800 session 0x560b571e5c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168828928 unmapped: 41320448 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:18.914986+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c3dcc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 ms_handle_reset con 0x560b5c3dcc00 session 0x560b57c9b2c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168878080 unmapped: 41271296 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:19.915241+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 ms_handle_reset con 0x560b54cd2c00 session 0x560b55587a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b56f4e000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 ms_handle_reset con 0x560b56f4e000 session 0x560b59ae3e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 8.773941040s of 10.001762390s, submitted: 496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168927232 unmapped: 41222144 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 ms_handle_reset con 0x560b5a4a2000 session 0x560b551f0000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:20.915440+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 heartbeat osd_stat(store_statfs(0x1b2cb2000/0x0/0x1bfc00000, data 0x4f686dd/0x50dc000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 226 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 226 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 168927232 unmapped: 41222144 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 226 ms_handle_reset con 0x560b5a4a2800 session 0x560b555870e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:21.915600+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2374101 data_alloc: 301989888 data_used: 17670144
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169000960 unmapped: 41148416 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:22.915771+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169017344 unmapped: 41132032 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 228 handle_osd_map epochs [227,228], i have 228, src has [1,228]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:23.915961+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 228 heartbeat osd_stat(store_statfs(0x1b2c84000/0x0/0x1bfc00000, data 0x4f9003b/0x510a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 228 ms_handle_reset con 0x560b5b4fac00 session 0x560b57180000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169041920 unmapped: 41107456 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:24.916107+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 228 ms_handle_reset con 0x560b54cd2c00 session 0x560b56cea3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 170196992 unmapped: 39952384 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:25.916429+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 229 ms_handle_reset con 0x560b5a4a2000 session 0x560b573d52c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169263104 unmapped: 40886272 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:26.916575+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 229 ms_handle_reset con 0x560b5a4a2800 session 0x560b58db2960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2516764 data_alloc: 301989888 data_used: 17682432
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169402368 unmapped: 40747008 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69bc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69b800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 230 ms_handle_reset con 0x560b5c69bc00 session 0x560b571d01e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:27.916717+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 230 ms_handle_reset con 0x560b5c69b800 session 0x560b57159680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69b800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 169861120 unmapped: 40288256 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 230 ms_handle_reset con 0x560b5c69b800 session 0x560b574010e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 231 ms_handle_reset con 0x560b5a544000 session 0x560b55cccd20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 231 heartbeat osd_stat(store_statfs(0x1b1841000/0x0/0x1bfc00000, data 0x63cb02b/0x6549000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:28.916871+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 170909696 unmapped: 39239680 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:29.917069+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.023007393s of 10.003026962s, submitted: 304
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171032576 unmapped: 39116800 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:30.917230+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 ms_handle_reset con 0x560b54cd2c00 session 0x560b54ee72c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171040768 unmapped: 39108608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:31.917405+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69bc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 ms_handle_reset con 0x560b5c69bc00 session 0x560b56d1b0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 ms_handle_reset con 0x560b5a4a2800 session 0x560b551f01e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2681252 data_alloc: 301989888 data_used: 17694720
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 ms_handle_reset con 0x560b5b74e800 session 0x560b56d1b680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171114496 unmapped: 39034880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 ms_handle_reset con 0x560b5a544c00 session 0x560b57401860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:32.917554+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 ms_handle_reset con 0x560b5a4a2000 session 0x560b573d4f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171114496 unmapped: 39034880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:33.917727+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 heartbeat osd_stat(store_statfs(0x1b0a7e000/0x0/0x1bfc00000, data 0x718e0a5/0x730f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 233 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171106304 unmapped: 39043072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:34.917886+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 235 ms_handle_reset con 0x560b54cd2c00 session 0x560b59cbf4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171212800 unmapped: 38936576 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:35.918034+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171229184 unmapped: 38920192 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:36.918251+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2598876 data_alloc: 301989888 data_used: 17719296
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 236 heartbeat osd_stat(store_statfs(0x1b17f4000/0x0/0x1bfc00000, data 0x64130ff/0x6598000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 237 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171261952 unmapped: 38887424 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 237 ms_handle_reset con 0x560b5a544000 session 0x560b59ae32c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:37.918429+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171360256 unmapped: 38789120 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:38.918604+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171368448 unmapped: 38780928 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 ms_handle_reset con 0x560b54cd2c00 session 0x560b561a65a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:39.918847+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.022356033s of 10.000051498s, submitted: 289
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171524096 unmapped: 38625280 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:40.918992+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 ms_handle_reset con 0x560b5a4a2000 session 0x560b59a6ed20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171548672 unmapped: 38600704 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:41.919132+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 238 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2455325 data_alloc: 301989888 data_used: 17743872
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171556864 unmapped: 38592512 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 240 heartbeat osd_stat(store_statfs(0x1b2bc9000/0x0/0x1bfc00000, data 0x503805b/0x51c3000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:42.919312+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171597824 unmapped: 38551552 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:43.919470+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 240 heartbeat osd_stat(store_statfs(0x1b2bc4000/0x0/0x1bfc00000, data 0x503a51c/0x51c7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 171679744 unmapped: 38469632 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:44.919633+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 172113920 unmapped: 38035456 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:45.919834+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 172130304 unmapped: 38019072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:46.920015+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2464769 data_alloc: 301989888 data_used: 17756160
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 172359680 unmapped: 37789696 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:47.920173+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 heartbeat osd_stat(store_statfs(0x1b2b71000/0x0/0x1bfc00000, data 0x508e75b/0x521c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 177119232 unmapped: 33030144 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5a544c00 session 0x560b5c505a40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5b74e800 session 0x560b561a6780
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:48.920363+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69b800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5c69b800 session 0x560b59ad34a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b54cd2c00 session 0x560b551f0f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5a4a2000 session 0x560b56ceb0e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 173178880 unmapped: 36970496 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 heartbeat osd_stat(store_statfs(0x1b1f32000/0x0/0x1bfc00000, data 0x5cc9c4b/0x5e5b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:49.920620+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 heartbeat osd_stat(store_statfs(0x1b1f32000/0x0/0x1bfc00000, data 0x5cc9c4b/0x5e5b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.353137970s of 10.003338814s, submitted: 242
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 173375488 unmapped: 36773888 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:50.920884+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5a544c00 session 0x560b5c504f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5b74e800 session 0x560b571e5860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 173375488 unmapped: 36773888 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:51.921088+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 heartbeat osd_stat(store_statfs(0x1b1f03000/0x0/0x1bfc00000, data 0x5cf9cda/0x5e8b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69b800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b5c69b800 session 0x560b573d5e00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 ms_handle_reset con 0x560b54cd2c00 session 0x560b54ee7680
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 heartbeat osd_stat(store_statfs(0x1b1ed8000/0x0/0x1bfc00000, data 0x5d23cfd/0x5eb6000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2574948 data_alloc: 301989888 data_used: 17756160
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a4a2000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174620672 unmapped: 35528704 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:52.921226+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1ed8000/0x0/0x1bfc00000, data 0x5d23cfd/0x5eb6000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 35430400 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:53.921567+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174825472 unmapped: 35323904 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:54.921747+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1ec1000/0x0/0x1bfc00000, data 0x5d38a10/0x5ecd000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1eab000/0x0/0x1bfc00000, data 0x5d4f62a/0x5ee3000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174563328 unmapped: 35586048 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:55.921917+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174735360 unmapped: 35414016 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:56.922095+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1e81000/0x0/0x1bfc00000, data 0x5d79001/0x5f0d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2601186 data_alloc: 301989888 data_used: 20029440
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174858240 unmapped: 35291136 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:57.922247+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1e81000/0x0/0x1bfc00000, data 0x5d79001/0x5f0d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174858240 unmapped: 35291136 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:58.922421+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 174858240 unmapped: 35291136 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:59.922634+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176087040 unmapped: 34062336 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:00.922795+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.924610138s of 11.111027718s, submitted: 52
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176250880 unmapped: 33898496 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:01.922989+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1e43000/0x0/0x1bfc00000, data 0x5db737f/0x5f4b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2607490 data_alloc: 301989888 data_used: 20029440
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176381952 unmapped: 33767424 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:02.923204+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176504832 unmapped: 33644544 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:03.923419+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176504832 unmapped: 33644544 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:04.923604+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b1e0d000/0x0/0x1bfc00000, data 0x5dec0e7/0x5f81000, compress 0x0/0x0/0x0, omap 0x647, meta 0x7e6f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 176766976 unmapped: 33382400 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:05.923718+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180633600 unmapped: 29515776 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:06.923829+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2701292 data_alloc: 301989888 data_used: 20189184
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181354496 unmapped: 28794880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:07.924007+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181354496 unmapped: 28794880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:08.924205+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b00ee000/0x0/0x1bfc00000, data 0x696a0e9/0x6b00000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181354496 unmapped: 28794880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:09.924440+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180420608 unmapped: 29728768 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:10.924622+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:11.924811+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.778106689s of 10.319179535s, submitted: 137
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180502528 unmapped: 29646848 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2715034 data_alloc: 301989888 data_used: 20193280
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:12.924977+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180502528 unmapped: 29646848 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:13.925140+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180781056 unmapped: 29368320 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:14.925277+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180887552 unmapped: 29261824 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1b0074000/0x0/0x1bfc00000, data 0x69e22a6/0x6b79000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:15.925450+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180887552 unmapped: 29261824 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:16.925589+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180887552 unmapped: 29261824 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2716802 data_alloc: 301989888 data_used: 20193280
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:17.925868+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181067776 unmapped: 29081600 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:18.926226+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182206464 unmapped: 27942912 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1afff6000/0x0/0x1bfc00000, data 0x6a60ca4/0x6bf8000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:19.926457+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182345728 unmapped: 27803648 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:20.926597+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181731328 unmapped: 28418048 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:21.926808+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.746399879s of 10.002966881s, submitted: 65
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181739520 unmapped: 28409856 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2731524 data_alloc: 301989888 data_used: 20193280
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:22.926965+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181739520 unmapped: 28409856 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1aff8d000/0x0/0x1bfc00000, data 0x6ac8dee/0x6c60000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:23.927150+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182067200 unmapped: 28082176 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:24.927355+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182067200 unmapped: 28082176 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:25.927506+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182214656 unmapped: 27934720 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 heartbeat osd_stat(store_statfs(0x1aff8d000/0x0/0x1bfc00000, data 0x6ac8dee/0x6c60000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 243 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 244 heartbeat osd_stat(store_statfs(0x1aff4c000/0x0/0x1bfc00000, data 0x6b08243/0x6ca1000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:26.927696+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180928512 unmapped: 29220864 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2732726 data_alloc: 301989888 data_used: 20205568
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:27.927900+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 180928512 unmapped: 29220864 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b74e800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:28.928051+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 181985280 unmapped: 28164096 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:29.928295+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182018048 unmapped: 28131328 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:30.928439+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5c69bc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182018048 unmapped: 28131328 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 244 heartbeat osd_stat(store_statfs(0x1aff07000/0x0/0x1bfc00000, data 0x6b4d9bc/0x6ce7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:31.928627+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182067200 unmapped: 28082176 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.837653160s of 10.120409966s, submitted: 84
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 245 ms_handle_reset con 0x560b58e1ac00 session 0x560b571d0960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2750015 data_alloc: 301989888 data_used: 20221952
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 246 ms_handle_reset con 0x560b5c69bc00 session 0x560b55ccd4a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:32.928811+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182132736 unmapped: 28016640 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 246 heartbeat osd_stat(store_statfs(0x1afed4000/0x0/0x1bfc00000, data 0x6b77c6c/0x6d18000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:33.928957+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b57c000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182132736 unmapped: 28016640 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:34.929143+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182132736 unmapped: 28016640 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fb000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 247 ms_handle_reset con 0x560b5b4fb000 session 0x560b55cce5a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:35.929315+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182165504 unmapped: 27983872 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 248 ms_handle_reset con 0x560b5b57c000 session 0x560b5899eb40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:36.929461+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182173696 unmapped: 27975680 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 248 heartbeat osd_stat(store_statfs(0x1afe98000/0x0/0x1bfc00000, data 0x6bad9e4/0x6d54000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2766456 data_alloc: 301989888 data_used: 20221952
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:37.929605+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182173696 unmapped: 27975680 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:38.929828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182403072 unmapped: 27746304 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:39.930061+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182411264 unmapped: 27738112 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 53
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b54cd2c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:40.930294+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 182894592 unmapped: 27254784 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b58e1ac00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 250 heartbeat osd_stat(store_statfs(0x1afe34000/0x0/0x1bfc00000, data 0x6c0d681/0x6db8000, compress 0x0/0x0/0x0, omap 0x647, meta 0x900f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fb400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:41.930430+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 250 ms_handle_reset con 0x560b5b4fb400 session 0x560b5befef00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183181312 unmapped: 26968064 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.617230415s of 10.081233025s, submitted: 130
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2792238 data_alloc: 301989888 data_used: 20246528
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:42.930576+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 251 ms_handle_reset con 0x560b58e1ac00 session 0x560b59cbed20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183353344 unmapped: 26796032 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:43.930707+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183353344 unmapped: 26796032 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:44.930913+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183271424 unmapped: 26877952 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 252 heartbeat osd_stat(store_statfs(0x1af9eb000/0x0/0x1bfc00000, data 0x6c5765f/0x6e03000, compress 0x0/0x0/0x0, omap 0x647, meta 0x940f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:45.931079+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184516608 unmapped: 25632768 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:46.931215+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184516608 unmapped: 25632768 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2793454 data_alloc: 301989888 data_used: 20258816
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:47.931357+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184713216 unmapped: 25436160 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:48.931558+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184909824 unmapped: 25239552 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 254 handle_osd_map epochs [254,254], i have 254, src has [1,254]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 254 ms_handle_reset con 0x560b5b74e800 session 0x560b5c4fde00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:49.931801+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184909824 unmapped: 25239552 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 254 ms_handle_reset con 0x560b5a4a2000 session 0x560b59a6ed20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 254 ms_handle_reset con 0x560b5a544c00 session 0x560b57400960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 255 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:50.931949+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183074816 unmapped: 27074560 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 255 heartbeat osd_stat(store_statfs(0x1af94d000/0x0/0x1bfc00000, data 0x6cf1d8a/0x6e9f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x940f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 255 ms_handle_reset con 0x560b5a544c00 session 0x560b571e50e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:51.932194+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183115776 unmapped: 27033600 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.489407539s of 10.158554077s, submitted: 264
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2614520 data_alloc: 301989888 data_used: 17858560
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:52.932348+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183222272 unmapped: 26927104 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:53.932512+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183328768 unmapped: 26820608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 257 heartbeat osd_stat(store_statfs(0x1b10a6000/0x0/0x1bfc00000, data 0x5598de8/0x5746000, compress 0x0/0x0/0x0, omap 0x647, meta 0x940f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:54.932687+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 183328768 unmapped: 26820608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49914 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:55.932838+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184508416 unmapped: 25640960 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 258 heartbeat osd_stat(store_statfs(0x1b2049000/0x0/0x1bfc00000, data 0x55f5a32/0x57a2000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:56.932990+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184631296 unmapped: 25518080 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 ms_handle_reset con 0x560b576ff800 session 0x560b5c4fc960
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2628348 data_alloc: 301989888 data_used: 17858560
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:57.933091+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fbc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 ms_handle_reset con 0x560b5b4fbc00 session 0x560b571705a0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184647680 unmapped: 25501696 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:58.933249+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184688640 unmapped: 25460736 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 heartbeat osd_stat(store_statfs(0x1b201d000/0x0/0x1bfc00000, data 0x562013d/0x57cf000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:59.933475+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184688640 unmapped: 25460736 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:00.933673+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184623104 unmapped: 25526272 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:01.933905+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 heartbeat osd_stat(store_statfs(0x1b200a000/0x0/0x1bfc00000, data 0x563557d/0x57e4000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184745984 unmapped: 25403392 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2632284 data_alloc: 301989888 data_used: 17854464
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.879208565s of 10.598340988s, submitted: 252
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:02.934052+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184754176 unmapped: 25395200 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:03.934278+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184754176 unmapped: 25395200 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:04.934462+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808e400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 262 ms_handle_reset con 0x560b5808e400 session 0x560b59a6eb40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184762368 unmapped: 25387008 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:05.934679+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184967168 unmapped: 25182208 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5dd15000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 263 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 263 ms_handle_reset con 0x560b5dd15000 session 0x560b57c9b860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:06.934864+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 184975360 unmapped: 25174016 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 264 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 264 ms_handle_reset con 0x560b576ff800 session 0x560b59ae21e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2655610 data_alloc: 301989888 data_used: 17883136
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:07.935062+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 264 heartbeat osd_stat(store_statfs(0x1b1f99000/0x0/0x1bfc00000, data 0x56a15de/0x5854000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 185040896 unmapped: 25108480 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808e400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 264 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 265 ms_handle_reset con 0x560b5808e400 session 0x560b58db2f00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:08.935211+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 265 heartbeat osd_stat(store_statfs(0x1b1f93000/0x0/0x1bfc00000, data 0x56a3b9f/0x5859000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 185049088 unmapped: 25100288 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:09.935399+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 185147392 unmapped: 25001984 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 265 ms_handle_reset con 0x560b5a544c00 session 0x560b571801e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fbc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 266 ms_handle_reset con 0x560b5b4fbc00 session 0x560b58db21e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:10.935540+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 185155584 unmapped: 24993792 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 266 heartbeat osd_stat(store_statfs(0x1b1f6e000/0x0/0x1bfc00000, data 0x56c3c52/0x587d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5dd14c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 266 ms_handle_reset con 0x560b5dd14c00 session 0x560b580383c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:11.935699+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 267 ms_handle_reset con 0x560b576ff800 session 0x560b56f90000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186236928 unmapped: 23912448 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808e400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 268 ms_handle_reset con 0x560b5808e400 session 0x560b5723a3c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2662489 data_alloc: 301989888 data_used: 17907712
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:12.935840+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.536602974s of 10.171621323s, submitted: 207
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 268 heartbeat osd_stat(store_statfs(0x1b1f6d000/0x0/0x1bfc00000, data 0x56c61a3/0x5880000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186368000 unmapped: 23781376 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 269 ms_handle_reset con 0x560b5a544c00 session 0x560b5c4fc1e0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:13.936035+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186368000 unmapped: 23781376 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:14.936190+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fbc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186376192 unmapped: 23773184 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 269 ms_handle_reset con 0x560b5b4fbc00 session 0x560b59ad3c20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:15.936368+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187490304 unmapped: 22659072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:16.936516+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187490304 unmapped: 22659072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2672270 data_alloc: 301989888 data_used: 17915904
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:17.936656+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187490304 unmapped: 22659072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 270 heartbeat osd_stat(store_statfs(0x1b1f00000/0x0/0x1bfc00000, data 0x572dec8/0x58ed000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:18.936832+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187539456 unmapped: 22609920 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:19.937022+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187539456 unmapped: 22609920 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 270 heartbeat osd_stat(store_statfs(0x1b1ed2000/0x0/0x1bfc00000, data 0x575a5c1/0x591b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:20.937197+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187539456 unmapped: 22609920 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:21.937403+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187539456 unmapped: 22609920 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2687872 data_alloc: 301989888 data_used: 17928192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:22.937564+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.755902290s of 10.153867722s, submitted: 127
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187539456 unmapped: 22609920 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:23.937736+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187670528 unmapped: 22478848 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:24.937947+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187604992 unmapped: 22544384 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1e56000/0x0/0x1bfc00000, data 0x57d697e/0x5998000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [0,0,2])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:25.938171+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187604992 unmapped: 22544384 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1e29000/0x0/0x1bfc00000, data 0x58037cf/0x59c5000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:26.938374+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186400768 unmapped: 23748608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:27.938574+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2685218 data_alloc: 301989888 data_used: 17928192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186400768 unmapped: 23748608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:28.938838+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186400768 unmapped: 23748608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:29.939074+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186400768 unmapped: 23748608 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:30.939256+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1e12000/0x0/0x1bfc00000, data 0x581c435/0x59dc000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186458112 unmapped: 23691264 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:31.939400+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186458112 unmapped: 23691264 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:32.939534+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2690806 data_alloc: 301989888 data_used: 17928192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 186458112 unmapped: 23691264 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.142389297s of 10.272731781s, submitted: 26
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:33.939705+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187523072 unmapped: 22626304 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:34.939924+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187523072 unmapped: 22626304 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1db4000/0x0/0x1bfc00000, data 0x587914b/0x5a3a000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:35.940141+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187588608 unmapped: 22560768 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:36.940295+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1d7a000/0x0/0x1bfc00000, data 0x58b3343/0x5a74000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187588608 unmapped: 22560768 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:37.940451+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2700820 data_alloc: 301989888 data_used: 17928192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187662336 unmapped: 22487040 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:38.940641+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187908096 unmapped: 22241280 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:39.940835+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187908096 unmapped: 22241280 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:40.941027+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1d17000/0x0/0x1bfc00000, data 0x5915bdf/0x5ad6000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 187998208 unmapped: 22151168 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 heartbeat osd_stat(store_statfs(0x1b1d17000/0x0/0x1bfc00000, data 0x5915bdf/0x5ad6000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:41.941231+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188235776 unmapped: 21913600 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 ms_handle_reset con 0x560b54cd2c00 session 0x560b5723af00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:42.941451+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2704446 data_alloc: 301989888 data_used: 17928192
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188596224 unmapped: 21553152 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:43.941619+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 54
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.498064041s of 10.753914833s, submitted: 297
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188784640 unmapped: 21364736 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:44.941707+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188784640 unmapped: 21364736 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:45.941893+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188801024 unmapped: 21348352 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:46.942095+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188801024 unmapped: 21348352 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 272 heartbeat osd_stat(store_statfs(0x1b1ca1000/0x0/0x1bfc00000, data 0x598ba5c/0x5b4c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:47.942252+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2712094 data_alloc: 301989888 data_used: 17940480
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 188874752 unmapped: 21274624 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 272 heartbeat osd_stat(store_statfs(0x1b1ca1000/0x0/0x1bfc00000, data 0x598ba5c/0x5b4c000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:48.942436+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189014016 unmapped: 21135360 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:49.942631+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189120512 unmapped: 21028864 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:50.942851+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 19980288 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:51.942981+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 190169088 unmapped: 19980288 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 272 handle_osd_map epochs [272,273], i have 273, src has [1,273]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:52.943105+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2729654 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 190349312 unmapped: 19800064 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:53.943289+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189251584 unmapped: 20897792 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1be1000/0x0/0x1bfc00000, data 0x5a4a8b4/0x5c0d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:54.943470+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189251584 unmapped: 20897792 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.975757599s of 11.245203018s, submitted: 85
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:55.943606+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189358080 unmapped: 20791296 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:56.943747+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189702144 unmapped: 20447232 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:57.943876+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2727856 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189702144 unmapped: 20447232 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:58.944081+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 189874176 unmapped: 20275200 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:59.944295+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 190922752 unmapped: 19226624 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1af7000/0x0/0x1bfc00000, data 0x5b34e0d/0x5cf7000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [0,0,3])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:00.944459+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191094784 unmapped: 19054592 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1ad1000/0x0/0x1bfc00000, data 0x5b5c7be/0x5d1d000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:01.944592+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191094784 unmapped: 19054592 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:02.944702+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2744110 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191234048 unmapped: 18915328 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:03.944851+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191586304 unmapped: 18563072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:04.944979+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191586304 unmapped: 18563072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.696935654s of 10.003815651s, submitted: 66
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1a5c000/0x0/0x1bfc00000, data 0x5bcf33c/0x5d91000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:05.945181+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 190218240 unmapped: 19931136 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:06.945318+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191291392 unmapped: 18857984 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:07.945511+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2758562 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191291392 unmapped: 18857984 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:08.945711+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191766528 unmapped: 18382848 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:09.945925+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 191463424 unmapped: 18685952 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:10.962280+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b199b000/0x0/0x1bfc00000, data 0x5c922b1/0x5e53000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193159168 unmapped: 16990208 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:11.962416+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1914000/0x0/0x1bfc00000, data 0x5d19606/0x5eda000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193110016 unmapped: 17039360 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc ms_handle_reset ms_handle_reset con 0x560b57977c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc reconnect Terminating session with v2:172.18.0.108:6810/3354697053
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: get_auth_request con 0x560b580d9800 auth_method 0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_configure stats_period=5
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:12.962576+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2757696 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1910000/0x0/0x1bfc00000, data 0x5d1d141/0x5ede000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193110016 unmapped: 17039360 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:13.962768+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 ms_handle_reset con 0x560b5517fc00 session 0x560b5899fa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b576ff800
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193126400 unmapped: 17022976 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:14.962972+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193257472 unmapped: 16891904 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.632020950s of 10.005836487s, submitted: 73
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:15.963172+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193257472 unmapped: 16891904 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:16.963268+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b18be000/0x0/0x1bfc00000, data 0x5d6eb18/0x5f30000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193265664 unmapped: 16883712 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1874000/0x0/0x1bfc00000, data 0x5db6e3c/0x5f79000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:17.963391+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2770544 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193323008 unmapped: 16826368 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:18.963524+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193323008 unmapped: 16826368 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b185e000/0x0/0x1bfc00000, data 0x5dcd579/0x5f90000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:19.963756+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193323008 unmapped: 16826368 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:20.963952+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193511424 unmapped: 16637952 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b17d0000/0x0/0x1bfc00000, data 0x5e5c5c6/0x601e000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:21.964090+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193511424 unmapped: 16637952 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:22.964269+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2785524 data_alloc: 301989888 data_used: 17952768
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193511424 unmapped: 16637952 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:23.964428+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193863680 unmapped: 16285696 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:24.964600+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193863680 unmapped: 16285696 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 heartbeat osd_stat(store_statfs(0x1b1783000/0x0/0x1bfc00000, data 0x5eaa555/0x606b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:25.964759+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.592511177s of 10.870923042s, submitted: 61
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 193921024 unmapped: 16228352 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:26.964953+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 195239936 unmapped: 14909440 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:27.965091+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2788700 data_alloc: 301989888 data_used: 17965056
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 274 heartbeat osd_stat(store_statfs(0x1b1753000/0x0/0x1bfc00000, data 0x5ed8bb8/0x609b000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 195239936 unmapped: 14909440 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:28.965225+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 195362816 unmapped: 14786560 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:29.965423+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 274 heartbeat osd_stat(store_statfs(0x1b171d000/0x0/0x1bfc00000, data 0x5f0dd96/0x60d1000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [0,0,0,0,2])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 195518464 unmapped: 14630912 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:30.965600+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 195534848 unmapped: 14614528 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:31.965739+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 195690496 unmapped: 14458880 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:32.965953+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2806028 data_alloc: 301989888 data_used: 17965056
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _renew_subs
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197304320 unmapped: 12845056 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:33.966112+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197410816 unmapped: 12738560 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:34.966289+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 275 heartbeat osd_stat(store_statfs(0x1b1638000/0x0/0x1bfc00000, data 0x5fefdbb/0x61b5000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197410816 unmapped: 12738560 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:35.966443+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197550080 unmapped: 12599296 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.043486595s of 10.449810982s, submitted: 111
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:36.966600+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197550080 unmapped: 12599296 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:37.966766+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2817670 data_alloc: 301989888 data_used: 17977344
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197558272 unmapped: 12591104 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 275 heartbeat osd_stat(store_statfs(0x1b15a5000/0x0/0x1bfc00000, data 0x608324b/0x6248000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:38.967000+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197844992 unmapped: 12304384 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:39.967209+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 197869568 unmapped: 12279808 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:40.967538+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 276 heartbeat osd_stat(store_statfs(0x1b156a000/0x0/0x1bfc00000, data 0x60bc93d/0x6284000, compress 0x0/0x0/0x0, omap 0x647, meta 0x840f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 198926336 unmapped: 11223040 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:41.967697+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 198221824 unmapped: 11927552 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:42.967904+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2828488 data_alloc: 301989888 data_used: 17989632
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 198221824 unmapped: 11927552 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:43.968069+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 276 heartbeat osd_stat(store_statfs(0x1afedf000/0x0/0x1bfc00000, data 0x61a92ef/0x636f000, compress 0x0/0x0/0x0, omap 0x647, meta 0x99af9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 198230016 unmapped: 11919360 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:44.968255+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200802304 unmapped: 9347072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:45.968400+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200802304 unmapped: 9347072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:46.968593+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200802304 unmapped: 9347072 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.548650742s of 11.001379013s, submitted: 114
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 277 handle_osd_map epochs [277,277], i have 277, src has [1,277]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:47.968768+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2845654 data_alloc: 301989888 data_used: 18001920
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 201015296 unmapped: 9134080 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:48.968973+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 277 heartbeat osd_stat(store_statfs(0x1aec9c000/0x0/0x1bfc00000, data 0x6249828/0x6411000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 201015296 unmapped: 9134080 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:49.969182+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 277 heartbeat osd_stat(store_statfs(0x1aec9c000/0x0/0x1bfc00000, data 0x6249828/0x6411000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 201015296 unmapped: 9134080 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:50.969386+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200695808 unmapped: 9453568 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:51.969547+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200695808 unmapped: 9453568 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:52.969699+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2856678 data_alloc: 301989888 data_used: 18001920
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200695808 unmapped: 9453568 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:53.969855+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200916992 unmapped: 9232384 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:54.970035+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 200925184 unmapped: 9224192 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:55.970191+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 278 heartbeat osd_stat(store_statfs(0x1aeba9000/0x0/0x1bfc00000, data 0x633cba2/0x6505000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 201023488 unmapped: 9125888 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:56.970363+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 201113600 unmapped: 9035776 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.616417885s of 10.053112984s, submitted: 107
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:57.970519+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2873988 data_alloc: 301989888 data_used: 18014208
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 278 heartbeat osd_stat(store_statfs(0x1aeb6b000/0x0/0x1bfc00000, data 0x63780c6/0x6543000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 202276864 unmapped: 7872512 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:58.970710+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 202276864 unmapped: 7872512 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:59.970962+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 202702848 unmapped: 7446528 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 278 heartbeat osd_stat(store_statfs(0x1aeb15000/0x0/0x1bfc00000, data 0x63ce781/0x6599000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:00.971126+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 202842112 unmapped: 7307264 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:01.971275+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 278 heartbeat osd_stat(store_statfs(0x1aea9d000/0x0/0x1bfc00000, data 0x6446c60/0x6611000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 202842112 unmapped: 7307264 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:02.971575+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2876276 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 203046912 unmapped: 7102464 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:03.971764+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 204472320 unmapped: 5677056 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:04.971962+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1aea27000/0x0/0x1bfc00000, data 0x64b0a6c/0x667f000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 205545472 unmapped: 4603904 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:05.972073+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 205398016 unmapped: 4751360 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:06.972235+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 203636736 unmapped: 6512640 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.680982590s of 10.047563553s, submitted: 99
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:07.972368+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2903720 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 203915264 unmapped: 6234112 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:08.972547+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 204283904 unmapped: 5865472 heap: 210149376 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:09.972719+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae94a000/0x0/0x1bfc00000, data 0x6595597/0x6763000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 204972032 unmapped: 6225920 heap: 211197952 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:10.972896+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 205185024 unmapped: 6012928 heap: 211197952 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:11.973085+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 205643776 unmapped: 5554176 heap: 211197952 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:12.973243+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2911946 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 205643776 unmapped: 5554176 heap: 211197952 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:13.973416+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 206913536 unmapped: 5332992 heap: 212246528 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:14.973570+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae862000/0x0/0x1bfc00000, data 0x667d6a0/0x684b000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 207060992 unmapped: 5185536 heap: 212246528 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:15.973743+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 207060992 unmapped: 5185536 heap: 212246528 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:16.973893+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 207355904 unmapped: 4890624 heap: 212246528 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.644666672s of 10.000456810s, submitted: 78
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:17.974033+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2923872 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 207675392 unmapped: 5619712 heap: 213295104 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:18.974189+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 207675392 unmapped: 5619712 heap: 213295104 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:19.974346+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 208871424 unmapped: 4423680 heap: 213295104 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:20.974495+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae751000/0x0/0x1bfc00000, data 0x67908eb/0x695d000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 209231872 unmapped: 4063232 heap: 213295104 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:21.974641+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 209231872 unmapped: 4063232 heap: 213295104 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:22.974878+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2929310 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 209231872 unmapped: 4063232 heap: 213295104 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:23.975039+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 209338368 unmapped: 5005312 heap: 214343680 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:24.975216+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 210386944 unmapped: 3956736 heap: 214343680 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:25.975414+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae65b000/0x0/0x1bfc00000, data 0x6884c7e/0x6a53000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 210386944 unmapped: 3956736 heap: 214343680 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:26.975602+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae65b000/0x0/0x1bfc00000, data 0x6884c7e/0x6a53000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae600000/0x0/0x1bfc00000, data 0x68dea23/0x6aad000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 210698240 unmapped: 4694016 heap: 215392256 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:27.975830+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2951002 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 210714624 unmapped: 4677632 heap: 215392256 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:28.975965+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 210722816 unmapped: 4669440 heap: 215392256 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:29.976127+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 12.007405281s of 12.410357475s, submitted: 81
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 211779584 unmapped: 3612672 heap: 215392256 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:30.976317+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae513000/0x0/0x1bfc00000, data 0x69c9ca5/0x6b99000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 211779584 unmapped: 4661248 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:31.976457+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 211779584 unmapped: 4661248 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae513000/0x0/0x1bfc00000, data 0x69c9ca5/0x6b99000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:32.976618+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2964218 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212361216 unmapped: 4079616 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:33.976767+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 211828736 unmapped: 4612096 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:34.976933+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 211828736 unmapped: 4612096 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:35.977093+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212041728 unmapped: 4399104 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c2000/0x0/0x1bfc00000, data 0x6a1b68b/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:36.977265+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:37.977411+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2955942 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:38.977556+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:39.977757+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.089460373s of 10.367487907s, submitted: 63
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:40.977936+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:41.978091+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c5000/0x0/0x1bfc00000, data 0x6a1b5c5/0x6be8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:42.978240+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2955942 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:43.978428+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:44.978633+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:45.978808+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:46.978982+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c3000/0x0/0x1bfc00000, data 0x6a1b6d1/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:47.979166+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c3000/0x0/0x1bfc00000, data 0x6a1b6d1/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2957886 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c4000/0x0/0x1bfc00000, data 0x6a1b68b/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:48.979321+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c4000/0x0/0x1bfc00000, data 0x6a1b68b/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:49.979531+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c4000/0x0/0x1bfc00000, data 0x6a1b68b/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.127981186s of 10.250601768s, submitted: 22
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:50.979703+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212049920 unmapped: 4390912 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:51.979858+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c3000/0x0/0x1bfc00000, data 0x6a1b798/0x6bea000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212058112 unmapped: 4382720 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:52.980006+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2958436 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212058112 unmapped: 4382720 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:53.980167+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212066304 unmapped: 4374528 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:54.980341+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212066304 unmapped: 4374528 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:55.980486+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c4000/0x0/0x1bfc00000, data 0x6a1b68c/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212066304 unmapped: 4374528 heap: 216440832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:56.980635+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae4c4000/0x0/0x1bfc00000, data 0x6a1b68c/0x6be9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212082688 unmapped: 5406720 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:57.980812+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2965740 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212180992 unmapped: 5308416 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:58.980964+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 212180992 unmapped: 5308416 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:59.981147+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae495000/0x0/0x1bfc00000, data 0x6a49820/0x6c18000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213237760 unmapped: 4251648 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:00.981316+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.011252403s of 10.131768227s, submitted: 25
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213237760 unmapped: 4251648 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:01.981464+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213237760 unmapped: 4251648 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:02.981624+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2972610 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213434368 unmapped: 4055040 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:03.981823+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213434368 unmapped: 4055040 heap: 217489408 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:04.981925+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae3f5000/0x0/0x1bfc00000, data 0x6aecea2/0x6cb9000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213434368 unmapped: 5103616 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:05.982070+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 213786624 unmapped: 4751360 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:06.982185+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae3d7000/0x0/0x1bfc00000, data 0x6b0ba4b/0x6cd7000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215031808 unmapped: 3506176 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:07.982479+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2977048 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215040000 unmapped: 3497984 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:08.982619+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:09.982863+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215212032 unmapped: 3325952 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:10.982998+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215228416 unmapped: 3309568 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:11.983159+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215228416 unmapped: 3309568 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae339000/0x0/0x1bfc00000, data 0x6baa0d1/0x6d75000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:12.983400+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215498752 unmapped: 3039232 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2983740 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:13.983593+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215498752 unmapped: 3039232 heap: 218537984 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 13.031019211s of 13.279790878s, submitted: 52
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:14.983766+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215220224 unmapped: 4366336 heap: 219586560 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:15.983963+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215375872 unmapped: 4210688 heap: 219586560 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:16.984217+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 215375872 unmapped: 4210688 heap: 219586560 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:17.984448+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216424448 unmapped: 3162112 heap: 219586560 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae2f0000/0x0/0x1bfc00000, data 0x6bf2c23/0x6dbe000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2991852 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:18.984643+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216645632 unmapped: 2940928 heap: 219586560 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae2a0000/0x0/0x1bfc00000, data 0x6c428ee/0x6e0e000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:19.984886+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216645632 unmapped: 2940928 heap: 219586560 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:20.985049+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216645632 unmapped: 3989504 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae25d000/0x0/0x1bfc00000, data 0x6c85d96/0x6e51000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:21.985222+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216850432 unmapped: 3784704 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:22.985388+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216850432 unmapped: 3784704 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2993944 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:23.985519+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216850432 unmapped: 3784704 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:24.985812+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae20d000/0x0/0x1bfc00000, data 0x6cd498b/0x6ea1000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216866816 unmapped: 3768320 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:25.985968+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216866816 unmapped: 3768320 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:26.986249+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 216866816 unmapped: 3768320 heap: 220635136 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 13.061393738s of 13.318737984s, submitted: 49
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:27.986479+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219029504 unmapped: 3702784 heap: 222732288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae1c4000/0x0/0x1bfc00000, data 0x6d1e436/0x6eea000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3002392 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:28.986708+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219029504 unmapped: 3702784 heap: 222732288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:29.986991+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219029504 unmapped: 3702784 heap: 222732288 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:30.987194+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219037696 unmapped: 4743168 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:31.987476+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219045888 unmapped: 4734976 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:32.987666+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219054080 unmapped: 4726784 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3010122 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae162000/0x0/0x1bfc00000, data 0x6d7f4c4/0x6f4b000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:33.987873+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219308032 unmapped: 4472832 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:34.988042+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219045888 unmapped: 4734976 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:35.988214+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219144192 unmapped: 4636672 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:36.988409+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 219144192 unmapped: 4636672 heap: 223780864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 9.741312027s of 10.001796722s, submitted: 55
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:37.988741+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220266496 unmapped: 4562944 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3015488 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:38.988921+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220266496 unmapped: 4562944 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0cb000/0x0/0x1bfc00000, data 0x6e17c2c/0x6fe3000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:39.989136+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:40.989320+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:41.989505+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:42.989701+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3010584 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:43.989937+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:44.990152+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:45.990356+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59329 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b5c3/0x6fe7000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:46.990567+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:47.990764+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220487680 unmapped: 4341760 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012128 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 11.013169289s of 11.051259995s, submitted: 7
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:48.990973+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b65e/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b65e/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:49.991199+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _send_mon_message to mon.np0005548789 at v2:172.18.0.104:3300/0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:50.991390+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:51.991548+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b65e/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b6b3/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:52.991716+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3013720 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:53.991879+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b6b3/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b6b3/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:54.992015+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220495872 unmapped: 4333568 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:55.992200+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220504064 unmapped: 4325376 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:56.992405+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b65e/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:57.992563+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c5000/0x0/0x1bfc00000, data 0x6e1b65e/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3013720 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.076677322s of 10.134666443s, submitted: 10
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:58.992717+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:59.992955+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:00.993138+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:01.993314+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b65c/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:02.993476+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b595/0x6fe7000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012854 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:03.993703+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220512256 unmapped: 4317184 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:04.993881+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:05.994040+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:06.994879+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 ms_handle_reset con 0x560b56091400 session 0x560b56f912c0
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5808e400
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 ms_handle_reset con 0x560b55afc000 session 0x560b55586d20
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5a544c00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:07.995038+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012212 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:08.995197+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:09.995426+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:10.995597+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:11.995879+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220528640 unmapped: 4300800 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:12.996055+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012212 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:13.996209+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:14.996375+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:15.996516+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:16.996706+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:17.996870+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012212 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:18.997060+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:19.997263+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220536832 unmapped: 4292608 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:20.997438+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:21.997603+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:22.997765+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012212 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:23.997981+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:24.998148+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:25.998279+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:26.998626+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:27.998812+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 29.234876633s of 29.259424210s, submitted: 5
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220553216 unmapped: 4276224 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012228 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:28.998973+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:29.999164+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:30.999326+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:31.999469+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c8000/0x0/0x1bfc00000, data 0x6e1b4fa/0x6fe6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:32.999598+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3012228 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:33.999977+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:35.000189+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:36.000337+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220561408 unmapped: 4268032 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b630/0x6fe8000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:37.000477+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220569600 unmapped: 4259840 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b595/0x6fe7000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:38.000629+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220569600 unmapped: 4259840 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3014834 data_alloc: 301989888 data_used: 18026496
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:39.000721+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220569600 unmapped: 4259840 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:40.000865+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220569600 unmapped: 4259840 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 12.426636696s of 12.450979233s, submitted: 4
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:41.001003+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 280 heartbeat osd_stat(store_statfs(0x1ae0c6000/0x0/0x1bfc00000, data 0x6e1b595/0x6fe7000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220602368 unmapped: 4227072 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:42.001192+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220602368 unmapped: 4227072 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:43.001321+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.2 total, 600.0 interval
                                                          Cumulative writes: 26K writes, 100K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.01 MB/s
                                                          Cumulative WAL: 26K writes, 9481 syncs, 2.80 writes per sync, written: 0.10 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 13K writes, 51K keys, 13K commit groups, 1.0 writes per commit group, ingest: 54.67 MB, 0.09 MB/s
                                                          Interval WAL: 13K writes, 5358 syncs, 2.48 writes per sync, written: 0.05 GB, 0.09 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220602368 unmapped: 4227072 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 280 heartbeat osd_stat(store_statfs(0x1ae0c3000/0x0/0x1bfc00000, data 0x6e1da96/0x6fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3017140 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:44.001449+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 280 heartbeat osd_stat(store_statfs(0x1ae0c3000/0x0/0x1bfc00000, data 0x6e1da96/0x6fea000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220602368 unmapped: 4227072 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:45.001603+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220610560 unmapped: 4218880 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:46.001875+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220610560 unmapped: 4218880 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:47.002018+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220610560 unmapped: 4218880 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:48.002159+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220626944 unmapped: 4202496 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3019262 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:49.002283+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0bf000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220626944 unmapped: 4202496 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:50.002477+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220626944 unmapped: 4202496 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:51.002615+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220626944 unmapped: 4202496 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:52.002839+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220626944 unmapped: 4202496 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:53.002984+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3019262 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:54.003131+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:55.003303+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:56.003475+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:57.003623+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:58.003803+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:59.004169+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3019262 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:00.004454+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220635136 unmapped: 4194304 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:01.004615+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220643328 unmapped: 4186112 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:02.004816+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220643328 unmapped: 4186112 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:03.004989+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220651520 unmapped: 4177920 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:04.005157+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3019262 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220651520 unmapped: 4177920 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:05.005349+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220651520 unmapped: 4177920 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:06.005512+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220651520 unmapped: 4177920 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:07.005660+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220651520 unmapped: 4177920 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:08.005828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220651520 unmapped: 4177920 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:09.005998+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3019262 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220667904 unmapped: 4161536 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:10.006170+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220667904 unmapped: 4161536 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:11.006336+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220667904 unmapped: 4161536 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:12.006500+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 220667904 unmapped: 4161536 heap: 224829440 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 heartbeat osd_stat(store_statfs(0x1ae0c0000/0x0/0x1bfc00000, data 0x6e1feaf/0x6fee000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:13.006632+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b55afc000
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 32.404953003s of 32.560253143s, submitted: 53
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 229064704 unmapped: 4161536 heap: 233226240 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:14.006817+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3077773 data_alloc: 301989888 data_used: 18038784
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221208576 unmapped: 17260544 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:15.006962+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 282 ms_handle_reset con 0x560b55afc000 session 0x560b5c4fcf00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221216768 unmapped: 17252352 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: handle_auth_request added challenge on 0x560b5b4fbc00
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:16.007153+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 282 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 283 handle_osd_map epochs [282,283], i have 283, src has [1,283]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 283 handle_osd_map epochs [282,283], i have 283, src has [1,283]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 283 ms_handle_reset con 0x560b5b4fbc00 session 0x560b57081860
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:17.007305+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:18.007445+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 283 heartbeat osd_stat(store_statfs(0x1ae0b7000/0x0/0x1bfc00000, data 0x6e24969/0x6ff6000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:19.007608+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3030016 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:20.007852+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:21.008013+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:22.008249+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221274112 unmapped: 17195008 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:23.008442+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221290496 unmapped: 17178624 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 10.301142693s of 10.589327812s, submitted: 72
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:24.008641+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3033018 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221290496 unmapped: 17178624 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:25.008841+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221298688 unmapped: 17170432 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:26.009194+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221298688 unmapped: 17170432 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:27.009348+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221298688 unmapped: 17170432 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:28.009495+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221298688 unmapped: 17170432 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:29.009634+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3033018 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221298688 unmapped: 17170432 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:30.009826+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221315072 unmapped: 17154048 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:31.009994+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221315072 unmapped: 17154048 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:32.010159+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221315072 unmapped: 17154048 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:33.010306+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:34.010450+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3033018 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:35.010597+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:36.010854+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:37.011016+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:38.011284+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:39.011449+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3033018 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221323264 unmapped: 17145856 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:40.011724+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b3000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221339648 unmapped: 17129472 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:41.011905+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221347840 unmapped: 17121280 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread utilization: idle 18.181961060s of 18.193437576s, submitted: 10
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 ms_handle_reset con 0x560b5b74f800 session 0x560b55b4fa40
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:42.012091+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221700096 unmapped: 16769024 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:43.012242+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221700096 unmapped: 16769024 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:44.012403+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Got map version 55
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221888512 unmapped: 16580608 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:45.012756+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221888512 unmapped: 16580608 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:46.012935+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221888512 unmapped: 16580608 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:47.013254+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221888512 unmapped: 16580608 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:48.013555+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221888512 unmapped: 16580608 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:49.013685+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:50.014042+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:51.014311+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:52.014494+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:53.014817+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:54.015000+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:55.015172+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:56.015383+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221904896 unmapped: 16564224 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:57.015559+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221913088 unmapped: 16556032 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:58.015729+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221913088 unmapped: 16556032 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:59.015865+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221913088 unmapped: 16556032 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:00.016113+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221921280 unmapped: 16547840 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:01.016276+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221921280 unmapped: 16547840 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:02.016476+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221921280 unmapped: 16547840 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:03.016633+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221921280 unmapped: 16547840 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:04.016866+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221921280 unmapped: 16547840 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:05.017000+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221929472 unmapped: 16539648 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:06.017180+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221929472 unmapped: 16539648 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:07.017371+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221937664 unmapped: 16531456 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:08.017512+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221937664 unmapped: 16531456 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:09.017642+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221937664 unmapped: 16531456 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:10.017851+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221937664 unmapped: 16531456 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:11.017992+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221937664 unmapped: 16531456 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:12.018174+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221937664 unmapped: 16531456 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:13.018322+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:14.018482+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:15.018630+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:16.018743+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:17.018923+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:18.019089+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:19.019238+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221945856 unmapped: 16523264 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:20.019464+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:21.019630+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:22.019856+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:23.019998+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:24.020178+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:25.020324+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:26.020525+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:27.020677+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221954048 unmapped: 16515072 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:28.020822+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221962240 unmapped: 16506880 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:29.020973+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221962240 unmapped: 16506880 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:30.021172+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221962240 unmapped: 16506880 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:31.021286+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221962240 unmapped: 16506880 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:32.022877+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221962240 unmapped: 16506880 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:33.023064+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221970432 unmapped: 16498688 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:34.023214+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221970432 unmapped: 16498688 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:35.023382+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221970432 unmapped: 16498688 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:36.023561+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:37.023768+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:38.049273+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:39.050311+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:40.050882+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:41.051078+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:42.051505+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:43.052667+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221978624 unmapped: 16490496 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:44.053489+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221986816 unmapped: 16482304 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:45.053681+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221986816 unmapped: 16482304 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:46.054302+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221986816 unmapped: 16482304 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:47.054454+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221986816 unmapped: 16482304 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:48.054817+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222027776 unmapped: 16441344 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:49.054965+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222027776 unmapped: 16441344 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:50.055240+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222044160 unmapped: 16424960 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:51.055387+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222044160 unmapped: 16424960 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:52.055582+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:53.055981+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:54.056384+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:55.056691+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:56.057063+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:57.057361+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:58.057651+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:59.057853+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:00.058150+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222060544 unmapped: 16408576 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:01.058399+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222068736 unmapped: 16400384 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:02.058623+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222068736 unmapped: 16400384 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:03.058828+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222076928 unmapped: 16392192 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:04.059036+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222076928 unmapped: 16392192 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:05.059266+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222076928 unmapped: 16392192 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:06.059467+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222076928 unmapped: 16392192 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:07.059633+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222076928 unmapped: 16392192 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:08.059856+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222076928 unmapped: 16392192 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:09.060005+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:10.060194+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:11.060298+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:12.060455+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:13.060580+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:14.060736+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:15.060878+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:16.061050+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222101504 unmapped: 16367616 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:17.061189+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222109696 unmapped: 16359424 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:18.061372+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:19.061557+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:20.061736+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:21.061894+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:22.062060+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:23.062186+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:24.062295+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222117888 unmapped: 16351232 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: bluestore.MempoolThread(0x560b53a41b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3032314 data_alloc: 301989888 data_used: 18063360
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:25.062431+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222126080 unmapped: 16343040 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:26.062534+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222126080 unmapped: 16343040 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'config diff' '{prefix=config diff}'
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'config show' '{prefix=config show}'
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:27.062934+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 221863936 unmapped: 16605184 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:28.063053+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222011392 unmapped: 16457728 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: osd.3 284 heartbeat osd_stat(store_statfs(0x1ae0b4000/0x0/0x1bfc00000, data 0x6e26d82/0x6ffa000, compress 0x0/0x0/0x0, omap 0x647, meta 0xab4f9b9), peers [0,1,2,4,5] op hist [])
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: tick
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_tickets
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:29.063196+0000)
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: prioritycache tune_memory target: 5709084876 mapped: 222085120 unmapped: 16384000 heap: 238469120 old mem: 4047415775 new mem: 4047415775
Dec 06 10:32:59 np0005548790.localdomain ceph-osd[32586]: do_command 'log dump' '{prefix=log dump}'
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69536 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.59269 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.69476 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.49875 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.59275 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.69488 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.49890 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.59293 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3924164660' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2460720482' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4179676265' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4060345088' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/842483459' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1645509840' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:32:59 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69551 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69566 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:33:00.332 280869 DEBUG oslo_service.periodic_task [None req-b068e529-20e8-484c-bef2-4c9095965785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 06 10:33:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59356 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4104655568' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.
Dec 06 10:33:00 np0005548790.localdomain systemd[1]: tmp-crun.AxkQca.mount: Deactivated successfully.
Dec 06 10:33:00 np0005548790.localdomain podman[329427]: 2025-12-06 10:33:00.583610645 +0000 UTC m=+0.094265270 container health_status 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 06 10:33:00 np0005548790.localdomain podman[329427]: 2025-12-06 10:33:00.6311303 +0000 UTC m=+0.141784905 container exec_died 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 06 10:33:00 np0005548790.localdomain systemd[1]: 643364bd80a54443f6d3bfd8354a56bab838c7380de0624b1f6f9710be8a2d83.service: Deactivated successfully.
Dec 06 10:33:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69575 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59377 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49959 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2183890531' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.69503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.49905 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.59308 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.69521 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.49914 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: pgmap v831: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.59329 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.69536 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3258516296' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.59350 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.49932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/330610004' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1470188650' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4104655568' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2196284679' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3825277840' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1440614881' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 06 10:33:00 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2183890531' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 06 10:33:01 np0005548790.localdomain crontab[329544]: (root) LIST (root)
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69590 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:01 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 06 10:33:01 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/305260225' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59407 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:33:01 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:33:01.517+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.49983 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:33:01 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:33:01.519+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69614 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:01 np0005548790.localdomain ceph-mgr[286934]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:33:01 np0005548790.localdomain ceph-1939e851-b10c-5c3b-9bb7-8e7f380233e8-mgr-np0005548790-kvkfyr[286930]: 2025-12-06T10:33:01.844+0000 7f066579c640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 06 10:33:02 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:33:02.060 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2229859225' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.69551 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.69566 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.59356 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.69575 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.59377 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.49959 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2272977300' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/305260225' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/887149509' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2736339179' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 06 10:33:02 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1245768976' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/329508633' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1424359202' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:33:03.327 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.69590 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.59407 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.49983 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: pgmap v832: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.69614 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1780513470' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2229859225' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2327447688' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2107682123' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3153028272' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3333518524' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2479036' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/545032456' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/466198046' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3265217251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1245768976' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3564364230' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/471931038' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/329508633' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2266612871' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1424714830' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1424359202' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3066090828' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2422302273' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1710489525' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.
Dec 06 10:33:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.
Dec 06 10:33:03 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.
Dec 06 10:33:03 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:03 np0005548790.localdomain podman[329872]: 2025-12-06 10:33:03.590861505 +0000 UTC m=+0.099543312 container health_status 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:33:03 np0005548790.localdomain podman[329873]: 2025-12-06 10:33:03.593299289 +0000 UTC m=+0.102224222 container health_status 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 06 10:33:03 np0005548790.localdomain podman[329873]: 2025-12-06 10:33:03.602297541 +0000 UTC m=+0.111222514 container exec_died 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 06 10:33:03 np0005548790.localdomain systemd[1]: 8db16ea3ec08f89f47a02d211a4be96693a3fa9369a46ba8c4ccdb3c35d1faa8.service: Deactivated successfully.
Dec 06 10:33:03 np0005548790.localdomain podman[329874]: 2025-12-06 10:33:03.644647337 +0000 UTC m=+0.152501722 container health_status 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Dec 06 10:33:03 np0005548790.localdomain podman[329874]: 2025-12-06 10:33:03.656988748 +0000 UTC m=+0.164843103 container exec_died 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8 (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf 
as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 06 10:33:03 np0005548790.localdomain systemd[1]: 9a80149ad389e9d40dd029debffc6756f2317afe77600649a966e7a9b71de0a8.service: Deactivated successfully.
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2953662664' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain podman[329872]: 2025-12-06 10:33:03.728067455 +0000 UTC m=+0.236749292 container exec_died 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 06 10:33:03 np0005548790.localdomain systemd[1]: 0ff9255db4e093668b381c1a39d5353d6c648609b331d9324573a8161ce142f7.service: Deactivated successfully.
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 06 10:33:03 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2985416432' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84418560 unmapped: 1507328 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:21.940734+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84418560 unmapped: 1507328 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 35
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.103:6800/180363885,v1:172.18.0.103:6801/180363885]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:22.940927+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84606976 unmapped: 1318912 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:23.941103+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84606976 unmapped: 1318912 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 36
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.103:6800/180363885,v1:172.18.0.103:6801/180363885]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:24.941276+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:25.941479+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:26.941641+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:27.941832+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:28.942058+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:29.942222+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:30.942385+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:31.942556+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:32.942713+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:33.942854+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:34.943029+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:35.943253+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 37
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.103:6800/180363885,v1:172.18.0.103:6801/180363885]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:36.943408+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:37.943555+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5320 writes, 23K keys, 5320 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5320 writes, 739 syncs, 7.20 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 50 writes, 178 keys, 50 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                                          Interval WAL: 50 writes, 22 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:38.943697+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:39.943840+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:40.944007+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:41.944168+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:42.944402+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:43.944568+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:44.944824+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:45.945018+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:46.945256+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:47.945470+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:48.945654+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:49.945847+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:50.946040+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:51.946239+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:52.946453+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:53.946653+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:54.947057+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:55.947342+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:56.947537+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:57.947700+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:58.947879+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:09:59.948062+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_refused con 0x56131a7e7000 session 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:00.948234+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:01.948418+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:02.948598+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:03.948806+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 heartbeat osd_stat(store_statfs(0x1b9256000/0x0/0x1bfc00000, data 0x27b35e8/0x2836000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:04.948963+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:05.949142+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:06.949288+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 854233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 1310720 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 38
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now 
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc reconnect Terminating session with v2:172.18.0.103:6800/180363885
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc reconnect No active mgr available yet
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:07.949408+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 ms_handle_reset con 0x56131776b400 session 0x5613183c94a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.459091187s of 48.464553833s, submitted: 1
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84058112 unmapped: 1867776 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 39
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2148019987,v1:172.18.0.106:6811/2148019987]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc reconnect Starting new session with [v2:172.18.0.106:6810/2148019987,v1:172.18.0.106:6811/2148019987]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: get_auth_request con 0x561318f6b800 auth_method 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:08.949541+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_configure stats_period=5
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:09.949717+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:10.949894+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2148019987,v1:172.18.0.106:6811/2148019987]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:11.950040+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:12.950184+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:13.950332+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 41
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2148019987,v1:172.18.0.106:6811/2148019987]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:14.950492+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:15.950702+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:16.950880+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:17.951077+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:18.951244+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:19.951422+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:20.951629+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:21.951855+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:22.952110+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:23.952253+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:24.952435+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:25.952632+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:26.952896+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:27.953096+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:28.953270+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:29.953458+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:30.953594+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:31.953756+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:32.953965+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:33.954120+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:34.954321+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:35.954506+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:36.954713+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:37.954884+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:38.955093+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:39.955291+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:40.955503+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:41.960273+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:42.960474+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:43.960655+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:44.960869+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:45.961129+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:46.961292+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:47.961471+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:48.961599+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:49.961823+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:50.962004+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:51.962157+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:52.962335+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:53.962471+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:54.962590+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:55.962872+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:56.963060+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:57.963227+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:58.963361+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:10:59.963550+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:00.963760+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:01.963972+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:02.964151+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:03.964336+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:04.964481+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:05.964701+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:06.964929+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 857233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:07.965110+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:08.965235+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 heartbeat osd_stat(store_statfs(0x1b9252000/0x0/0x1bfc00000, data 0x27b59c9/0x283a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:09.965437+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:10.965627+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84279296 unmapped: 1646592 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 42
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now 
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc reconnect Terminating session with v2:172.18.0.106:6810/2148019987
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc reconnect No active mgr available yet
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 ms_handle_reset con 0x56131776b400 session 0x56131b051860
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7ac00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 63.799076080s of 63.805721283s, submitted: 1
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:11.965744+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84336640 unmapped: 1589248 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 43
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: get_auth_request con 0x56131ae55000 auth_method 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_configure stats_period=5
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:12.965820+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84066304 unmapped: 1859584 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:13.966309+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84066304 unmapped: 1859584 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:14.966447+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84066304 unmapped: 1859584 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 44
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:15.966632+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84066304 unmapped: 1859584 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 45
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:16.966884+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:17.967028+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:18.967170+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:19.967292+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:20.967418+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:21.967635+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:22.967813+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:23.967941+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:24.968081+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:25.968270+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:26.968423+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:27.968598+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:28.968751+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:29.968913+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:30.969047+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:31.969272+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:32.969426+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:33.969591+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:34.969753+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:35.970031+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:36.970201+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:37.970410+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:38.970575+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:39.970856+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:40.971035+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:41.971207+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:42.971445+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:43.971689+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:44.971841+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:45.972102+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:46.972368+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:47.972552+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:48.972711+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:49.972890+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:50.973063+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:51.973279+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:52.973439+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:53.973670+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:54.973854+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:55.974040+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:56.974226+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:57.974418+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:58.974579+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:11:59.974741+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:00.974937+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:01.975112+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:02.975253+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:03.975425+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:04.975635+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:05.975828+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:06.975992+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:07.976140+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:08.976265+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:09.976421+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:10.976604+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:11.976811+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:12.977001+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:13.977156+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:14.977352+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:15.977548+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:16.977697+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:17.977874+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:18.978009+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:19.978130+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:20.978296+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:21.978445+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:22.978644+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:23.978817+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:24.979361+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:25.979572+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:26.979864+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:27.980262+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:28.980400+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:29.980542+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:30.980747+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:31.980948+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:32.981172+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:33.981298+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:34.981454+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:35.981977+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:36.982505+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:37.982706+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 860233 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:38.982843+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:39.983000+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 heartbeat osd_stat(store_statfs(0x1b924e000/0x0/0x1bfc00000, data 0x27b7f02/0x283e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:40.983131+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 89.469406128s of 89.473350525s, submitted: 1
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:41.983311+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84254720 unmapped: 1671168 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:42.983500+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 46
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 866416 data_alloc: 285212672 data_used: 2756608
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84443136 unmapped: 1482752 heap: 85925888 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:43.983629+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84508672 unmapped: 17154048 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:44.983854+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 ms_handle_reset con 0x56131aea2800 session 0x56131b0f41e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84508672 unmapped: 17154048 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:45.984121+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 heartbeat osd_stat(store_statfs(0x1b8a49000/0x0/0x1bfc00000, data 0x2fba478/0x3045000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:46.984370+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:47.984640+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 927638 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:48.984923+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:49.985135+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:50.985355+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:51.985603+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:52.985870+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 927638 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:53.986069+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:54.986277+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:55.986594+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:56.986815+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:57.987012+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 927638 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:58.987164+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:12:59.987349+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:00.987617+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:01.987846+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:02.988092+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 927638 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:03.988253+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:04.988409+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:05.988639+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 heartbeat osd_stat(store_statfs(0x1b8a44000/0x0/0x1bfc00000, data 0x2fbc9ab/0x3049000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:06.988847+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:07.989114+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 927638 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84574208 unmapped: 17088512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:08.989242+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.816726685s of 27.919960022s, submitted: 18
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84590592 unmapped: 17072128 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:09.989403+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 17047552 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 98 heartbeat osd_stat(store_statfs(0x1b8a3e000/0x0/0x1bfc00000, data 0x2fbef11/0x304f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:10.989581+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84615168 unmapped: 17047552 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:11.989742+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e7400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84631552 unmapped: 17031168 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:12.989903+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 937537 data_alloc: 285212672 data_used: 2793472
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 99 ms_handle_reset con 0x56131a7e7400 session 0x56131a1ef4a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:13.990023+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:14.990190+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 99 heartbeat osd_stat(store_statfs(0x1b8a3b000/0x0/0x1bfc00000, data 0x2fc1465/0x3051000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:15.990384+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:16.990582+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:17.990755+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 936128 data_alloc: 285212672 data_used: 2793472
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 99 heartbeat osd_stat(store_statfs(0x1b8a3b000/0x0/0x1bfc00000, data 0x2fc1465/0x3051000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:18.990933+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:19.991124+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84672512 unmapped: 16990208 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:20.991300+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.687566757s of 11.830204964s, submitted: 39
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84697088 unmapped: 16965632 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:21.991498+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84770816 unmapped: 16891904 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 101 heartbeat osd_stat(store_statfs(0x1b8a37000/0x0/0x1bfc00000, data 0x2fc39d7/0x3056000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:22.991707+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 941791 data_alloc: 285212672 data_used: 2793472
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84770816 unmapped: 16891904 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:23.991888+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84770816 unmapped: 16891904 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:24.992056+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84770816 unmapped: 16891904 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:25.992281+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84770816 unmapped: 16891904 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 101 heartbeat osd_stat(store_statfs(0x1b8a34000/0x0/0x1bfc00000, data 0x2fc5df0/0x305a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:26.992440+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84787200 unmapped: 16875520 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:27.992579+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 945279 data_alloc: 285212672 data_used: 2805760
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 102 ms_handle_reset con 0x56131776a800 session 0x56131a1ee5a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:28.992702+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:29.992855+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 102 heartbeat osd_stat(store_statfs(0x1b8a30000/0x0/0x1bfc00000, data 0x2fc8354/0x305d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:30.993030+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:31.993232+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:32.993423+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 945279 data_alloc: 285212672 data_used: 2805760
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:33.993582+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:34.993819+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:35.994027+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 102 heartbeat osd_stat(store_statfs(0x1b8a30000/0x0/0x1bfc00000, data 0x2fc8354/0x305d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:36.994190+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.854754448s of 15.933326721s, submitted: 37
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b8a30000/0x0/0x1bfc00000, data 0x2fc8354/0x305d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:37.994349+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 947401 data_alloc: 285212672 data_used: 2805760
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:38.994519+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:39.994735+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:40.994954+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:41.995184+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:42.995356+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 947401 data_alloc: 285212672 data_used: 2805760
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b8a2d000/0x0/0x1bfc00000, data 0x2fca76d/0x3061000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:43.995531+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:44.995684+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 84795392 unmapped: 16867328 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:45.995869+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 85843968 unmapped: 15818752 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e2000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:46.996010+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.995869637s of 10.123256683s, submitted: 31
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 ms_handle_reset con 0x56131a7e2000 session 0x56131ae81860
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 85999616 unmapped: 15663104 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b8a2c000/0x0/0x1bfc00000, data 0x2fca77d/0x3062000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:47.996182+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985012 data_alloc: 285212672 data_used: 2805760
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 85999616 unmapped: 15663104 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:48.996320+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 ms_handle_reset con 0x56131a202400 session 0x56131ad90960
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 15556608 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:49.996498+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b85b9000/0x0/0x1bfc00000, data 0x343b7b0/0x34d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 86106112 unmapped: 15556608 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b071400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:50.996658+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 86663168 unmapped: 14999552 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:51.996836+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 87343104 unmapped: 14319616 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:52.996973+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b85b9000/0x0/0x1bfc00000, data 0x343b7b0/0x34d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1023247 data_alloc: 301989888 data_used: 7270400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:53.997137+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:54.997319+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:55.997575+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:56.997738+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:57.997867+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1023247 data_alloc: 301989888 data_used: 7270400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b85b9000/0x0/0x1bfc00000, data 0x343b7b0/0x34d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:58.997990+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90136576 unmapped: 11526144 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:13:59.998191+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90144768 unmapped: 11517952 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:00.998343+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.813238144s of 13.888871193s, submitted: 14
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 11288576 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:01.998553+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90456064 unmapped: 11206656 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:02.998735+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1031897 data_alloc: 301989888 data_used: 7393280
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90529792 unmapped: 11132928 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:03.998865+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 heartbeat osd_stat(store_statfs(0x1b8564000/0x0/0x1bfc00000, data 0x34907b0/0x352a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90718208 unmapped: 10944512 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:04.999094+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90750976 unmapped: 10911744 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:05.999272+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 ms_handle_reset con 0x56131b071400 session 0x56131a11dc20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 ms_handle_reset con 0x56131aea3000 session 0x56131ad905a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 90759168 unmapped: 10903552 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:06.999423+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a337000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97140736 unmapped: 4521984 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 ms_handle_reset con 0x56131a337000 session 0x56131b050f00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:07.999537+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1127333 data_alloc: 301989888 data_used: 7393280
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 91979776 unmapped: 9682944 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:08.999680+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 104 ms_handle_reset con 0x56131a202400 session 0x56131b050d20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a337000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 93814784 unmapped: 7847936 heap: 101662720 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:09.999839+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 104 heartbeat osd_stat(store_statfs(0x1b7a2f000/0x0/0x1bfc00000, data 0x3fc2ce3/0x405e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 104 ms_handle_reset con 0x56131a337000 session 0x56131b050960
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e2000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 94167040 unmapped: 17047552 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:10.999988+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.586531639s of 10.003039360s, submitted: 67
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 105 ms_handle_reset con 0x56131a7e2000 session 0x56131b0f41e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 93986816 unmapped: 17227776 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:12.000150+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 heartbeat osd_stat(store_statfs(0x1b6cb2000/0x0/0x1bfc00000, data 0x4d3983e/0x4dda000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 94011392 unmapped: 17203200 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:13.000307+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1250836 data_alloc: 301989888 data_used: 7753728
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 94011392 unmapped: 17203200 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae53800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:14.000435+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 ms_handle_reset con 0x56131ae53800 session 0x56131b051c20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 91709440 unmapped: 19505152 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:15.000556+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 91709440 unmapped: 19505152 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:16.000736+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 91807744 unmapped: 19406848 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:17.001044+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 heartbeat osd_stat(store_statfs(0x1b717d000/0x0/0x1bfc00000, data 0x48727fb/0x4910000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 91824128 unmapped: 19390464 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:18.001178+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1172313 data_alloc: 285212672 data_used: 2871296
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 91824128 unmapped: 19390464 heap: 111214592 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 107 heartbeat osd_stat(store_statfs(0x1b7179000/0x0/0x1bfc00000, data 0x4874c14/0x4914000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:19.001329+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30dc00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108724224 unmapped: 14753792 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:20.001508+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 107 ms_handle_reset con 0x56131a30dc00 session 0x56131b0f43c0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 107 heartbeat osd_stat(store_statfs(0x1b61d7000/0x0/0x1bfc00000, data 0x5816c24/0x58b7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97853440 unmapped: 25624576 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:21.001714+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97853440 unmapped: 25624576 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.529640198s of 10.839315414s, submitted: 72
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:22.001890+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30dc00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97861632 unmapped: 25616384 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:23.002059+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1207620 data_alloc: 285212672 data_used: 3510272
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 ms_handle_reset con 0x56131a30dc00 session 0x561319e441e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 92700672 unmapped: 30777344 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:24.002204+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 heartbeat osd_stat(store_statfs(0x1b61d7000/0x0/0x1bfc00000, data 0x5816bd2/0x58b7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96870400 unmapped: 26607616 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:25.002335+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97804288 unmapped: 25673728 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:26.002547+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 heartbeat osd_stat(store_statfs(0x1b6f44000/0x0/0x1bfc00000, data 0x4aa7159/0x4b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97869824 unmapped: 25608192 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:27.002733+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 98066432 unmapped: 25411584 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:28.002869+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1312114 data_alloc: 301989888 data_used: 11112448
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 heartbeat osd_stat(store_statfs(0x1b687c000/0x0/0x1bfc00000, data 0x516a159/0x520c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 98623488 unmapped: 24854528 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:29.003071+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 ms_handle_reset con 0x56131aea2400 session 0x561317bd3a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 98877440 unmapped: 24600576 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:30.003190+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100491264 unmapped: 22986752 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:31.003328+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100491264 unmapped: 22986752 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:32.003480+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.782679558s of 10.141608238s, submitted: 98
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100540416 unmapped: 22937600 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:33.003582+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 109 heartbeat osd_stat(store_statfs(0x1b6865000/0x0/0x1bfc00000, data 0x5184572/0x5228000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1318832 data_alloc: 301989888 data_used: 10948608
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100622336 unmapped: 22855680 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:34.003716+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 109 heartbeat osd_stat(store_statfs(0x1b6865000/0x0/0x1bfc00000, data 0x5184572/0x5228000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100704256 unmapped: 22773760 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:35.003881+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100720640 unmapped: 22757376 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:36.004069+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100720640 unmapped: 22757376 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:37.004216+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 109 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x5188572/0x522c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 109 ms_handle_reset con 0x56131a7e6400 session 0x561319c754a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 109 heartbeat osd_stat(store_statfs(0x1b6862000/0x0/0x1bfc00000, data 0x5188572/0x522c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100802560 unmapped: 22675456 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:38.004390+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1324880 data_alloc: 301989888 data_used: 11354112
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100818944 unmapped: 22659072 heap: 123478016 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:39.004517+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a336c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 ms_handle_reset con 0x56131a336c00 session 0x561319c20000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30cc00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 ms_handle_reset con 0x56131ae7b400 session 0x561319c474a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 ms_handle_reset con 0x56131a30cc00 session 0x561317bd2780
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30dc00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 121905152 unmapped: 12615680 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets getting new tickets!
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:40.004664+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _finish_auth 0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:40.006020+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 ms_handle_reset con 0x56131a30dc00 session 0x56131ad8af00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a336c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105422848 unmapped: 29097984 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:41.004864+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 ms_handle_reset con 0x56131a336c00 session 0x56131a1ee1e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105422848 unmapped: 29097984 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:42.005009+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.663247108s of 10.043148041s, submitted: 86
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105447424 unmapped: 29073408 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:43.005158+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1530707 data_alloc: 301989888 data_used: 14274560
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 112 heartbeat osd_stat(store_statfs(0x1b51a9000/0x0/0x1bfc00000, data 0x68395dd/0x68e3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97345536 unmapped: 37175296 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:44.005299+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 112 ms_handle_reset con 0x56131a2fc800 session 0x56131ae81860
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 112 ms_handle_reset con 0x56131776b400 session 0x561319c74d20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96862208 unmapped: 37658624 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:45.005440+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96862208 unmapped: 37658624 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:46.005608+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 112 heartbeat osd_stat(store_statfs(0x1b5acb000/0x0/0x1bfc00000, data 0x5f1b5bd/0x5fc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96862208 unmapped: 37658624 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:47.005738+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 113 heartbeat osd_stat(store_statfs(0x1b5acb000/0x0/0x1bfc00000, data 0x5f1b5bd/0x5fc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 98959360 unmapped: 35561472 heap: 134520832 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:48.005846+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1580898 data_alloc: 301989888 data_used: 6340608
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 113 ms_handle_reset con 0x56131776b400 session 0x5613183cba40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104882176 unmapped: 33488896 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:49.005959+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104882176 unmapped: 33488896 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:50.006074+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 113 heartbeat osd_stat(store_statfs(0x1b4704000/0x0/0x1bfc00000, data 0x72dea48/0x738a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104882176 unmapped: 33488896 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:51.006175+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e5800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 handle_osd_map epochs [114,114], i have 114, src has [1,114]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 ms_handle_reset con 0x561318ae2c00 session 0x561319c214a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 101949440 unmapped: 36421632 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:52.006286+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.585463524s of 10.125967026s, submitted: 137
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 ms_handle_reset con 0x56131776a400 session 0x5613183cbe00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 ms_handle_reset con 0x56131a7e5800 session 0x561319da3a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 ms_handle_reset con 0x56131a7e3000 session 0x561319da32c0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96903168 unmapped: 41467904 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:53.006403+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 heartbeat osd_stat(store_statfs(0x1b55a4000/0x0/0x1bfc00000, data 0x643bfdf/0x64ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1172698 data_alloc: 285212672 data_used: 1740800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 ms_handle_reset con 0x56131776a400 session 0x561318224f00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 ms_handle_reset con 0x56131776b400 session 0x5613183caf00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96468992 unmapped: 41902080 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:54.006567+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96468992 unmapped: 41902080 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:55.006715+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96468992 unmapped: 41902080 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:56.006857+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96518144 unmapped: 41852928 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:57.006998+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b6c71000/0x0/0x1bfc00000, data 0x3e70f3a/0x3f1b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96600064 unmapped: 41771008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:58.007117+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1192359 data_alloc: 285212672 data_used: 3678208
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 96976896 unmapped: 41394176 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:14:59.007291+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 97845248 unmapped: 40525824 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:00.007459+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99540992 unmapped: 38830080 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:01.007611+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99540992 unmapped: 38830080 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:02.007804+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b7b6c000/0x0/0x1bfc00000, data 0x3e73386/0x3f21000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:03.007927+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99573760 unmapped: 38797312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1227079 data_alloc: 301989888 data_used: 8597504
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:04.008090+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99573760 unmapped: 38797312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b7b6c000/0x0/0x1bfc00000, data 0x3e73386/0x3f21000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b7b6c000/0x0/0x1bfc00000, data 0x3e73386/0x3f21000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:05.008244+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99573760 unmapped: 38797312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.871729851s of 13.214777946s, submitted: 104
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:06.008439+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99573760 unmapped: 38797312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 ms_handle_reset con 0x561318ae2c00 session 0x56131882fc20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:07.008576+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99573760 unmapped: 38797312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:08.008694+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 99573760 unmapped: 38797312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b7b6d000/0x0/0x1bfc00000, data 0x3e73386/0x3f21000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1225975 data_alloc: 301989888 data_used: 8601600
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e2000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:09.008842+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 100859904 unmapped: 37511168 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:10.008958+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105037824 unmapped: 33333248 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:11.009142+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106086400 unmapped: 32284672 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:12.009517+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106463232 unmapped: 31907840 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:13.009683+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106790912 unmapped: 31580160 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1373595 data_alloc: 301989888 data_used: 8921088
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b6afd000/0x0/0x1bfc00000, data 0x4edb386/0x4f89000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:14.009834+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107192320 unmapped: 31178752 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:15.009985+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b6afd000/0x0/0x1bfc00000, data 0x4edb386/0x4f89000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107274240 unmapped: 31096832 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:16.010196+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107274240 unmapped: 31096832 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.872921944s of 11.525347710s, submitted: 170
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:17.010373+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106995712 unmapped: 31375360 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 ms_handle_reset con 0x56131ae55800 session 0x561317bd3a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:18.010506+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106987520 unmapped: 31383552 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e5c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 ms_handle_reset con 0x56131a7e2000 session 0x56131822a960
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 ms_handle_reset con 0x56131a7e5c00 session 0x56131b051a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1222382 data_alloc: 301989888 data_used: 5705728
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:19.010671+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103096320 unmapped: 35274752 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 ms_handle_reset con 0x56131776a400 session 0x561319f09680
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:20.010866+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:21.011021+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:22.011188+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:23.011344+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:24.011525+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:25.011722+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:26.011921+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:27.012070+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:28.012236+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:29.012463+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:30.012651+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:31.012837+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:32.013005+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:33.013189+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:34.013373+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:35.013541+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:36.013761+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:37.013980+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:38.014140+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:39.014304+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:40.014455+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:41.014581+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:42.014754+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:43.014823+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:44.014975+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:45.015126+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:46.015355+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:47.015501+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:48.072728+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:49.072918+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:50.073071+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:51.073229+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:52.073354+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:53.073533+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:54.073735+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:55.073926+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:56.074235+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:57.074462+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:58.074606+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:15:59.074836+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:00.075028+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:01.075225+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:02.075377+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:03.075534+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:04.075704+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:05.075866+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:06.076268+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:07.076409+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:08.076541+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103129088 unmapped: 35241984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:09.076719+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:10.076898+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:11.077048+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:12.077197+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:13.077364+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:14.077536+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:15.077707+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:16.077907+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103137280 unmapped: 35233792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:17.078055+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103145472 unmapped: 35225600 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:18.078233+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103145472 unmapped: 35225600 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1078102 data_alloc: 285212672 data_used: 1748992
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:19.078417+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103145472 unmapped: 35225600 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 heartbeat osd_stat(store_statfs(0x1b89fd000/0x0/0x1bfc00000, data 0x2fe62f1/0x3091000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:20.078558+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103145472 unmapped: 35225600 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:21.078802+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103145472 unmapped: 35225600 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b071400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 64.506591797s of 64.729621887s, submitted: 57
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:22.078946+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103161856 unmapped: 35209216 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 116 ms_handle_reset con 0x56131b071400 session 0x561319f094a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:23.079140+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103301120 unmapped: 35069952 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 117 ms_handle_reset con 0x56131aea3400 session 0x561319f09c20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 117 ms_handle_reset con 0x56131a80e400 session 0x56131b050b40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1100818 data_alloc: 285212672 data_used: 1773568
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:24.079320+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103333888 unmapped: 35037184 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:25.079451+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 118 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103153664 unmapped: 35217408 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 119 heartbeat osd_stat(store_statfs(0x1b89e8000/0x0/0x1bfc00000, data 0x2fee6e2/0x30a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 119 ms_handle_reset con 0x56131a80e400 session 0x561319f08960
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:26.079639+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103235584 unmapped: 35135488 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 120 ms_handle_reset con 0x56131776a400 session 0x561318224f00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 120 heartbeat osd_stat(store_statfs(0x1b89e4000/0x0/0x1bfc00000, data 0x2ff0c69/0x30a8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:27.123153+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 120 heartbeat osd_stat(store_statfs(0x1b89dc000/0x0/0x1bfc00000, data 0x2ff3209/0x30af000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 103374848 unmapped: 34996224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:28.123315+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104448000 unmapped: 33923072 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 121 heartbeat osd_stat(store_statfs(0x1b89db000/0x0/0x1bfc00000, data 0x2ff3c03/0x30b0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1122947 data_alloc: 285212672 data_used: 1773568
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:29.123508+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105512960 unmapped: 32858112 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 122 heartbeat osd_stat(store_statfs(0x1b89d3000/0x0/0x1bfc00000, data 0x2ff9308/0x30ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:30.123685+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104472576 unmapped: 33898496 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a240400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:31.123840+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104472576 unmapped: 33898496 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 122 heartbeat osd_stat(store_statfs(0x1b89d3000/0x0/0x1bfc00000, data 0x2ff9308/0x30ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:32.123981+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104488960 unmapped: 33882112 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.148643494s of 10.640843391s, submitted: 127
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 123 ms_handle_reset con 0x56131a240400 session 0x561318224b40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:33.124157+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80ec00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104562688 unmapped: 33808384 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a905800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1129845 data_alloc: 285212672 data_used: 1789952
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:34.124299+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 124 ms_handle_reset con 0x56131a905800 session 0x56131ad823c0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104628224 unmapped: 33742848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 124 ms_handle_reset con 0x56131a80ec00 session 0x561319c38b40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a240400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 124 ms_handle_reset con 0x56131a240400 session 0x56131ad91680
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:35.124453+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104710144 unmapped: 33660928 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:36.124648+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104824832 unmapped: 33546240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 125 heartbeat osd_stat(store_statfs(0x1b89ce000/0x0/0x1bfc00000, data 0x2ffcbb7/0x30bf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [0,1])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 125 ms_handle_reset con 0x56131776a400 session 0x561319c38d20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a905800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 125 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:37.124867+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104890368 unmapped: 33480704 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 126 heartbeat osd_stat(store_statfs(0x1b89c8000/0x0/0x1bfc00000, data 0x2fff14e/0x30c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:38.125024+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 127 ms_handle_reset con 0x56131a80e400 session 0x561319c741e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104931328 unmapped: 33439744 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 127 ms_handle_reset con 0x56131a905800 session 0x56131ad91860
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1139598 data_alloc: 285212672 data_used: 1814528
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:39.125179+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104579072 unmapped: 33792000 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 128 ms_handle_reset con 0x56131a2fc000 session 0x56131ad91a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 128 ms_handle_reset con 0x56131a7e3800 session 0x561319c75c20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:40.125362+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104660992 unmapped: 33710080 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 129 heartbeat osd_stat(store_statfs(0x1b85c3000/0x0/0x1bfc00000, data 0x300520a/0x30c8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 129 ms_handle_reset con 0x56131a2fc000 session 0x561319c750e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:41.125511+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104677376 unmapped: 33693696 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:42.125712+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104677376 unmapped: 33693696 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.852753639s of 10.639102936s, submitted: 282
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:43.125872+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104718336 unmapped: 33652736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 130 heartbeat osd_stat(store_statfs(0x1b85c3000/0x0/0x1bfc00000, data 0x3007007/0x30c9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1144038 data_alloc: 285212672 data_used: 1810432
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:44.126059+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104718336 unmapped: 33652736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:45.126206+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104718336 unmapped: 33652736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:46.126391+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104718336 unmapped: 33652736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:47.126519+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:48.126665+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1146192 data_alloc: 285212672 data_used: 1810432
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:49.126821+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 131 heartbeat osd_stat(store_statfs(0x1b85bc000/0x0/0x1bfc00000, data 0x300b8d5/0x30d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:50.126970+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:51.127129+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:52.127265+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:53.127373+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1146192 data_alloc: 285212672 data_used: 1810432
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:54.127530+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:55.127716+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 131 heartbeat osd_stat(store_statfs(0x1b85bc000/0x0/0x1bfc00000, data 0x300b8d5/0x30d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:56.127943+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:57.128097+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:58.128232+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1146192 data_alloc: 285212672 data_used: 1810432
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:16:59.128392+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 131 heartbeat osd_stat(store_statfs(0x1b85bc000/0x0/0x1bfc00000, data 0x300b8d5/0x30d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:00.129130+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:01.129279+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:02.129420+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:03.129580+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 131 heartbeat osd_stat(store_statfs(0x1b85bc000/0x0/0x1bfc00000, data 0x300b8d5/0x30d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1146192 data_alloc: 285212672 data_used: 1810432
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:04.129770+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 131 heartbeat osd_stat(store_statfs(0x1b85bc000/0x0/0x1bfc00000, data 0x300b8d5/0x30d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:05.129972+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104792064 unmapped: 33579008 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:06.130097+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.262424469s of 23.300199509s, submitted: 33
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104800256 unmapped: 33570816 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:07.130262+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104857600 unmapped: 33513472 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae3000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 133 ms_handle_reset con 0x561318ae3000 session 0x561319c74d20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:08.130438+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104865792 unmapped: 33505280 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1154338 data_alloc: 285212672 data_used: 1810432
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:09.130588+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104865792 unmapped: 33505280 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:10.130798+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104890368 unmapped: 33480704 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 134 ms_handle_reset con 0x56131a224400 session 0x5613183caf00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 134 heartbeat osd_stat(store_statfs(0x1b85af000/0x0/0x1bfc00000, data 0x30128f1/0x30de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:11.130943+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104972288 unmapped: 33398784 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:12.131549+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a905000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105029632 unmapped: 33341440 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 136 ms_handle_reset con 0x56131a905000 session 0x5613183cba40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:13.131934+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 136 heartbeat osd_stat(store_statfs(0x1b85a6000/0x0/0x1bfc00000, data 0x30172bd/0x30e6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105029632 unmapped: 33341440 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1167302 data_alloc: 285212672 data_used: 1839104
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:14.132317+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105029632 unmapped: 33341440 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:15.132486+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105029632 unmapped: 33341440 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:16.132746+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105029632 unmapped: 33341440 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:17.171641+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.761249542s of 10.875541687s, submitted: 56
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 137 heartbeat osd_stat(store_statfs(0x1b85a6000/0x0/0x1bfc00000, data 0x30172bd/0x30e6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105037824 unmapped: 33333248 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:18.172346+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 137 heartbeat osd_stat(store_statfs(0x1b85a3000/0x0/0x1bfc00000, data 0x30196d6/0x30ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105021440 unmapped: 33349632 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1169632 data_alloc: 285212672 data_used: 1839104
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:19.172920+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105021440 unmapped: 33349632 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 137 heartbeat osd_stat(store_statfs(0x1b85a3000/0x0/0x1bfc00000, data 0x30196d6/0x30ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:20.173856+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105021440 unmapped: 33349632 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:21.174365+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105021440 unmapped: 33349632 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 137 ms_handle_reset con 0x56131a70e800 session 0x561317be94a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:22.174574+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105037824 unmapped: 33333248 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:23.174749+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x561318f31000 session 0x561317be8b40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105062400 unmapped: 33308672 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1176387 data_alloc: 285212672 data_used: 1855488
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:24.174967+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105062400 unmapped: 33308672 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a203c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x56131a203c00 session 0x56131a7e0b40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x561318f30400 session 0x561317be81e0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae88400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x56131ae88400 session 0x56131808c3c0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:25.175625+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 heartbeat osd_stat(store_statfs(0x1b859c000/0x0/0x1bfc00000, data 0x301bc8a/0x30f1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105078784 unmapped: 33292288 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 heartbeat osd_stat(store_statfs(0x1b859c000/0x0/0x1bfc00000, data 0x301bc8a/0x30f1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:26.175852+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x561318f30400 session 0x56131808cd20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x561318f31000 session 0x561319c21a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105127936 unmapped: 33243136 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:27.176084+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a203c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 ms_handle_reset con 0x56131a203c00 session 0x561319c214a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.195681572s of 10.276930809s, submitted: 32
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105144320 unmapped: 33226752 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:28.176910+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x56131a70e800 session 0x561319c20000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105168896 unmapped: 33202176 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x56131a224400 session 0x56131a11d680
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1184073 data_alloc: 285212672 data_used: 1871872
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:29.177074+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105168896 unmapped: 33202176 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x561318f30400 session 0x56131a11c3c0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:30.177287+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105242624 unmapped: 33128448 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x561318f31000 session 0x56131a11dc20
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:31.177509+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 heartbeat osd_stat(store_statfs(0x1b859a000/0x0/0x1bfc00000, data 0x301e202/0x30f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a203c00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105299968 unmapped: 33071104 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x56131a203c00 session 0x561319da3680
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:32.177756+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104341504 unmapped: 34029568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x56131a70e800 session 0x561319da3e00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:33.178092+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104366080 unmapped: 34004992 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:34.178484+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1180835 data_alloc: 285212672 data_used: 1863680
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23c000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 ms_handle_reset con 0x56131a23c000 session 0x561319da3860
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104366080 unmapped: 34004992 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:35.178743+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104366080 unmapped: 34004992 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 heartbeat osd_stat(store_statfs(0x1b8599000/0x0/0x1bfc00000, data 0x301e265/0x30f5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:36.178966+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 heartbeat osd_stat(store_statfs(0x1b8599000/0x0/0x1bfc00000, data 0x301e265/0x30f5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104366080 unmapped: 34004992 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:37.179114+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 140 heartbeat osd_stat(store_statfs(0x1b8599000/0x0/0x1bfc00000, data 0x301e265/0x30f5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104382464 unmapped: 33988608 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:38.179279+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104382464 unmapped: 33988608 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:39.179436+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188121 data_alloc: 285212672 data_used: 1875968
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104382464 unmapped: 33988608 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:40.179599+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.552520752s of 12.834862709s, submitted: 85
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 140 heartbeat osd_stat(store_statfs(0x1b8594000/0x0/0x1bfc00000, data 0x302067e/0x30f9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 140 ms_handle_reset con 0x56131a2d7400 session 0x56131b051680
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104382464 unmapped: 33988608 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:41.179996+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 140 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104398848 unmapped: 33972224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30bc00
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 141 ms_handle_reset con 0x56131a30bc00 session 0x56131b051a40
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:42.180171+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104448000 unmapped: 33923072 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 142 ms_handle_reset con 0x56131a2fc000 session 0x56131b050960
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:43.180388+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a339800
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 142 ms_handle_reset con 0x56131a339800 session 0x56131b0505a0
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104464384 unmapped: 33906688 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 142 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:44.180655+0000)
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1197183 data_alloc: 285212672 data_used: 1888256
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2400
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:03 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23d400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 143 ms_handle_reset con 0x56131a23d400 session 0x56131a7e05a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 143 ms_handle_reset con 0x56131aea2400 session 0x56131b051c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104529920 unmapped: 33841152 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:45.180942+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 144 ms_handle_reset con 0x56131a2d7400 session 0x56131808c3c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104628224 unmapped: 33742848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30bc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:46.181186+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 144 heartbeat osd_stat(store_statfs(0x1b8580000/0x0/0x1bfc00000, data 0x3029c81/0x310c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 144 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a2fc000 session 0x561317be94a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a30bc00 session 0x56131a7e1e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104759296 unmapped: 33611776 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a339800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e2800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a7e2800 session 0x56131a1ee1e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:47.181373+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a339800 session 0x56131ad91a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a2d7400 session 0x56131a1ee780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104898560 unmapped: 33472512 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x561318ae2c00 session 0x56131b0f4780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:48.181526+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70f000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a70f000 session 0x561319e45860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104914944 unmapped: 33456128 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:49.181843+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1219586 data_alloc: 285212672 data_used: 1888256
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x561318f31000 session 0x561319e450e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104996864 unmapped: 33374208 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a339800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a339800 session 0x56131b0505a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x561318ae2c00 session 0x561319e443c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:50.182070+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.213962555s of 10.004975319s, submitted: 210
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a2d7400 session 0x56131b050d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105013248 unmapped: 33357824 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70f000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 ms_handle_reset con 0x56131a70f000 session 0x561319c46000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae54000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:51.182645+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 146 ms_handle_reset con 0x56131ae54000 session 0x561319c461e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113451008 unmapped: 24920064 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:52.182849+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 146 heartbeat osd_stat(store_statfs(0x1b6577000/0x0/0x1bfc00000, data 0x502eaf5/0x5116000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 146 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 146 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105095168 unmapped: 33275904 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 147 ms_handle_reset con 0x56131a2d7400 session 0x56131a7e05a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a241000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:53.182949+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 147 ms_handle_reset con 0x561318ae2c00 session 0x561319da8960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 148 ms_handle_reset con 0x56131a241000 session 0x561319da3e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105168896 unmapped: 33202176 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70f800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a338000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 148 ms_handle_reset con 0x56131a70f800 session 0x56131ad99a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b070800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:54.183106+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1507837 data_alloc: 285212672 data_used: 1921024
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 148 ms_handle_reset con 0x56131b070800 session 0x56131b0f4780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105111552 unmapped: 33259520 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 149 ms_handle_reset con 0x561318ae2c00 session 0x56131b050d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 149 ms_handle_reset con 0x56131a338000 session 0x56131a11d680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a241000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:55.183232+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 150 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 104439808 unmapped: 33931264 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 150 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 150 ms_handle_reset con 0x56131a241000 session 0x56131b051c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:56.183432+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 151 ms_handle_reset con 0x56131a2d7400 session 0x56131a11c3c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105521152 unmapped: 32849920 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70f800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 151 ms_handle_reset con 0x56131a70f800 session 0x561319c461e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:57.183593+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 151 heartbeat osd_stat(store_statfs(0x1b856a000/0x0/0x1bfc00000, data 0x303a15e/0x3122000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 ms_handle_reset con 0x561318ae2c00 session 0x561317be94a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105422848 unmapped: 32948224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23d000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 ms_handle_reset con 0x56131a23d000 session 0x561319b7dc20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:58.183753+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 ms_handle_reset con 0x56131a7e3400 session 0x561319b7c3c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105439232 unmapped: 32931840 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:17:59.183901+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1248077 data_alloc: 285212672 data_used: 1929216
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105439232 unmapped: 32931840 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:00.184135+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105439232 unmapped: 32931840 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:01.184318+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105439232 unmapped: 32931840 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:02.184472+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 heartbeat osd_stat(store_statfs(0x1b8565000/0x0/0x1bfc00000, data 0x303c711/0x3126000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 152 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.114872932s of 11.975986481s, submitted: 234
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105480192 unmapped: 32890880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:03.184617+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105480192 unmapped: 32890880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:04.184858+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1250231 data_alloc: 285212672 data_used: 1929216
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105480192 unmapped: 32890880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 153 heartbeat osd_stat(store_statfs(0x1b8563000/0x0/0x1bfc00000, data 0x303eb82/0x312a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:05.185016+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105480192 unmapped: 32890880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fcc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 153 ms_handle_reset con 0x56131a2fcc00 session 0x561318ad3c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:06.185180+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105480192 unmapped: 32890880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:07.185344+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105480192 unmapped: 32890880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 153 heartbeat osd_stat(store_statfs(0x1b8563000/0x0/0x1bfc00000, data 0x303eb82/0x312a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:08.185514+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a203400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 153 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 154 ms_handle_reset con 0x56131a203400 session 0x561318ad2f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105496576 unmapped: 32874496 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 154 ms_handle_reset con 0x561318ae2c00 session 0x561318ad3860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:09.185667+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1260133 data_alloc: 285212672 data_used: 1929216
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 105496576 unmapped: 32874496 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23d000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 154 ms_handle_reset con 0x56131a23d000 session 0x56131b0f5860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fcc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:10.185846+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106569728 unmapped: 31801344 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 155 ms_handle_reset con 0x56131a2fcc00 session 0x56131b0f4f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 155 ms_handle_reset con 0x56131a7e3400 session 0x56131b0f43c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:11.186012+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a241000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 155 ms_handle_reset con 0x56131a241000 session 0x56131b0f5e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106577920 unmapped: 31793152 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 156 ms_handle_reset con 0x561318ae2c00 session 0x56131b0f5c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23d000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 156 heartbeat osd_stat(store_statfs(0x1b8555000/0x0/0x1bfc00000, data 0x3045c45/0x3138000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:12.186147+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.779577255s of 10.009882927s, submitted: 99
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 156 ms_handle_reset con 0x56131a23d000 session 0x56131ae81680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fcc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106586112 unmapped: 31784960 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 156 ms_handle_reset con 0x56131a2fcc00 session 0x56131ae805a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:13.186325+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106602496 unmapped: 31768576 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:14.186506+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1262303 data_alloc: 285212672 data_used: 1941504
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106602496 unmapped: 31768576 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:15.186890+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106602496 unmapped: 31768576 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:16.187151+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106610688 unmapped: 31760384 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:17.187317+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80f000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106635264 unmapped: 31735808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131a80f000 session 0x56131ae80d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8552000/0x0/0x1bfc00000, data 0x304801c/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:18.187456+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106635264 unmapped: 31735808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70fc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:19.187590+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 47
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1269630 data_alloc: 285212672 data_used: 1941504
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131a202800 session 0x56131b0f4d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106897408 unmapped: 31473664 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:20.187753+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106905600 unmapped: 31465472 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8551000/0x0/0x1bfc00000, data 0x30480a7/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:21.187965+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106905600 unmapped: 31465472 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:22.188171+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106905600 unmapped: 31465472 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:23.188393+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8551000/0x0/0x1bfc00000, data 0x30480a7/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106905600 unmapped: 31465472 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:24.188531+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1269038 data_alloc: 285212672 data_used: 1941504
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 106913792 unmapped: 31457280 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:25.188683+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 48
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.912384987s of 13.194304466s, submitted: 79
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8551000/0x0/0x1bfc00000, data 0x30480a7/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107159552 unmapped: 31211520 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:26.188885+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107167744 unmapped: 31203328 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:27.189026+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8553000/0x0/0x1bfc00000, data 0x30480d6/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107200512 unmapped: 31170560 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:28.189247+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107200512 unmapped: 31170560 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8553000/0x0/0x1bfc00000, data 0x30480d6/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:29.189430+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1268176 data_alloc: 285212672 data_used: 1941504
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107200512 unmapped: 31170560 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8553000/0x0/0x1bfc00000, data 0x30480d6/0x313b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:30.189569+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131ae55800 session 0x561319c46960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131a70e800 session 0x561319e450e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107208704 unmapped: 31162368 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:31.189765+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107208704 unmapped: 31162368 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:32.190022+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b070c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131b070c00 session 0x561317bea000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107233280 unmapped: 31137792 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8551000/0x0/0x1bfc00000, data 0x3048149/0x313d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae89000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:33.190128+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x561318f31800 session 0x561319b90780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a336c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131ae89000 session 0x56131a1ef860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8551000/0x0/0x1bfc00000, data 0x3048149/0x313d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x56131a336c00 session 0x56131b0f4780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107094016 unmapped: 31277056 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 heartbeat osd_stat(store_statfs(0x1b8551000/0x0/0x1bfc00000, data 0x3048149/0x313d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [0,0,0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:34.190263+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 ms_handle_reset con 0x561318f31800 session 0x5613182243c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407571 data_alloc: 285212672 data_used: 1941504
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107225088 unmapped: 31145984 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:35.190386+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.718971252s of 10.220855713s, submitted: 118
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107339776 unmapped: 31031296 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 ms_handle_reset con 0x56131a70e800 session 0x56131b0f5860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:36.190544+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 ms_handle_reset con 0x56131ae55800 session 0x56131b0f4f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107323392 unmapped: 31047680 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:37.190668+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b070c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 ms_handle_reset con 0x56131b070c00 session 0x56131b0f5c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 heartbeat osd_stat(store_statfs(0x1b7976000/0x0/0x1bfc00000, data 0x3c2166b/0x3d17000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107347968 unmapped: 31023104 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:38.190856+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107347968 unmapped: 31023104 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 ms_handle_reset con 0x56131776ac00 session 0x561318ad3c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:39.190978+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1374133 data_alloc: 285212672 data_used: 1953792
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 ms_handle_reset con 0x56131aea2c00 session 0x561318ad3860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107364352 unmapped: 31006720 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:40.191102+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 ms_handle_reset con 0x56131a2fc000 session 0x561317be94a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30c400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 ms_handle_reset con 0x56131a30c400 session 0x561319c461e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107380736 unmapped: 30990336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:41.191298+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fcc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 ms_handle_reset con 0x56131a2fcc00 session 0x56131a11c3c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 ms_handle_reset con 0x56131776ac00 session 0x56131b050d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107380736 unmapped: 30990336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:42.191433+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 heartbeat osd_stat(store_statfs(0x1b7972000/0x0/0x1bfc00000, data 0x3c23bac/0x3d1c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107388928 unmapped: 30982144 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae88000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:43.191594+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107429888 unmapped: 30941184 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:44.192395+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a240400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131ae55400 session 0x561317bd23c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131ae88000 session 0x56131b051c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 heartbeat osd_stat(store_statfs(0x1b7968000/0x0/0x1bfc00000, data 0x3c286f4/0x3d26000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1387193 data_alloc: 285212672 data_used: 1982464
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131a240400 session 0x56131b0f41e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107446272 unmapped: 30924800 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:45.192542+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b071400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131b071400 session 0x56131b0f5c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131776ac00 session 0x561319b90780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107421696 unmapped: 30949376 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:46.192724+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a240400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.292724609s of 10.651900291s, submitted: 95
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131a240400 session 0x56131ae805a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131ae55400 session 0x561319e45860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107446272 unmapped: 30924800 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:47.192885+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107446272 unmapped: 30924800 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131ae78800 session 0x56131808c780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:48.193060+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 heartbeat osd_stat(store_statfs(0x1b7969000/0x0/0x1bfc00000, data 0x3c28738/0x3d25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107446272 unmapped: 30924800 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:49.193206+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388257 data_alloc: 285212672 data_used: 1982464
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae54000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131ae54000 session 0x561319f083c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107446272 unmapped: 30924800 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:50.193370+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131776ac00 session 0x561319c74b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107462656 unmapped: 30908416 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:51.193512+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a240400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131a240400 session 0x56131b0f41e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107470848 unmapped: 30900224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 ms_handle_reset con 0x56131ae55400 session 0x56131b0f5c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:52.193671+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 heartbeat osd_stat(store_statfs(0x1b796a000/0x0/0x1bfc00000, data 0x3c286d6/0x3d24000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107503616 unmapped: 30867456 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 162 ms_handle_reset con 0x56131ae78800 session 0x56131c0621e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:53.193733+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80f400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 162 ms_handle_reset con 0x56131a80f400 session 0x56131c062780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107528192 unmapped: 30842880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:54.193899+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1392542 data_alloc: 285212672 data_used: 1994752
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 162 heartbeat osd_stat(store_statfs(0x1b7966000/0x0/0x1bfc00000, data 0x3c2aaef/0x3d28000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107528192 unmapped: 30842880 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:55.194034+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131776ac00 session 0x56131c062960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107536384 unmapped: 30834688 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:56.194196+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a240400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a240400 session 0x56131c062d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107536384 unmapped: 30834688 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:57.194332+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b795f000/0x0/0x1bfc00000, data 0x3c2d0e6/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.921104431s of 11.183115005s, submitted: 79
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131ae55400 session 0x56131c062f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107552768 unmapped: 30818304 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:58.194444+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae54400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131ae54400 session 0x56131c0634a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107585536 unmapped: 30785536 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a225400 session 0x56131c063860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:18:59.194627+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1399320 data_alloc: 285212672 data_used: 2007040
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a225400 session 0x56131c2b8000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107610112 unmapped: 30760960 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:00.194813+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107618304 unmapped: 30752768 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:01.194964+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7961000/0x0/0x1bfc00000, data 0x3c2d084/0x3d2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107618304 unmapped: 30752768 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:02.195067+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107642880 unmapped: 30728192 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:03.195259+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131776a400 session 0x56131c2b83c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107667456 unmapped: 30703616 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:04.195392+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1401102 data_alloc: 285212672 data_used: 2007040
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107667456 unmapped: 30703616 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:05.195530+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 107675648 unmapped: 30695424 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a30b000 session 0x56131c2b85a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:06.195683+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7960000/0x0/0x1bfc00000, data 0x3c2d1b0/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [1,2,3,4,5] op hist [0,0,0,0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a7e3c00 session 0x56131c2b8d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30c800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a30c800 session 0x56131c2b9680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:07.195825+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:08.196012+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30bc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.707121849s of 11.120048523s, submitted: 87
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a30bc00 session 0x561319bb23c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b6610000/0x0/0x1bfc00000, data 0x4b7d15e/0x4c7e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:09.196176+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1523282 data_alloc: 285212672 data_used: 2007040
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:10.196318+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23c800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a23c800 session 0x56131c0632c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:11.196464+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e2400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a7e2400 session 0x56131ae803c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108347392 unmapped: 30023680 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 ms_handle_reset con 0x56131a7e6400 session 0x56131c062780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:12.196598+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108347392 unmapped: 30023680 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:13.196799+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7560000/0x0/0x1bfc00000, data 0x3c2d1e9/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108388352 unmapped: 29982720 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:14.196945+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1413281 data_alloc: 285212672 data_used: 2011136
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108388352 unmapped: 29982720 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:15.197082+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:16.197288+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:17.197548+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7560000/0x0/0x1bfc00000, data 0x3c2d318/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:18.197716+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7561000/0x0/0x1bfc00000, data 0x3c2d347/0x3d2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:19.197899+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1412415 data_alloc: 285212672 data_used: 2011136
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:20.198084+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.422436714s of 11.825881004s, submitted: 67
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:21.198276+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:22.198428+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7561000/0x0/0x1bfc00000, data 0x3c2d347/0x3d2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:23.198578+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:24.198731+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1412415 data_alloc: 285212672 data_used: 2011136
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:25.198886+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7561000/0x0/0x1bfc00000, data 0x3c2d347/0x3d2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:26.199132+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7561000/0x0/0x1bfc00000, data 0x3c2d347/0x3d2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:27.199272+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108404736 unmapped: 29966336 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:28.199393+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:29.199547+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1413815 data_alloc: 285212672 data_used: 2011136
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:30.199694+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.989104271s of 10.000363350s, submitted: 2
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:31.199852+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:32.199988+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7560000/0x0/0x1bfc00000, data 0x3c2d3e2/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:33.200153+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:34.200362+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1413815 data_alloc: 285212672 data_used: 2011136
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7560000/0x0/0x1bfc00000, data 0x3c2d3e2/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:35.200518+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:36.200696+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7560000/0x0/0x1bfc00000, data 0x3c2d3e2/0x3d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:37.200834+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108437504 unmapped: 29933568 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:38.200953+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                          Cumulative WAL: 10K writes, 2822 syncs, 3.55 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 4703 writes, 14K keys, 4703 commit groups, 1.0 writes per commit group, ingest: 10.73 MB, 0.02 MB/s
                                                          Interval WAL: 4703 writes, 2083 syncs, 2.26 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108478464 unmapped: 29892608 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:39.201097+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1414235 data_alloc: 285212672 data_used: 2011136
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108478464 unmapped: 29892608 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:40.201236+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 heartbeat osd_stat(store_statfs(0x1b7561000/0x0/0x1bfc00000, data 0x3c2d411/0x3d2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.961771965s of 10.004536629s, submitted: 10
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108494848 unmapped: 29876224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:41.201402+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 164 ms_handle_reset con 0x56131a7e3c00 session 0x56131b0f41e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108494848 unmapped: 29876224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:42.201547+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108494848 unmapped: 29876224 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:43.201691+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108503040 unmapped: 29868032 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:44.201840+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 165 heartbeat osd_stat(store_statfs(0x1b7556000/0x0/0x1bfc00000, data 0x3c320c3/0x3d37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1427093 data_alloc: 285212672 data_used: 2035712
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108503040 unmapped: 29868032 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:45.201984+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108511232 unmapped: 29859840 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 166 heartbeat osd_stat(store_statfs(0x1b7551000/0x0/0x1bfc00000, data 0x3c346ad/0x3d3c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:46.202155+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 166 ms_handle_reset con 0x56131a30b000 session 0x56131b0f5c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 29851648 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:47.202304+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 29851648 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:48.202439+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 29851648 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:49.202585+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1432201 data_alloc: 285212672 data_used: 2035712
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 167 heartbeat osd_stat(store_statfs(0x1b754d000/0x0/0x1bfc00000, data 0x3c36c6c/0x3d40000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 167 handle_osd_map epochs [168,168], i have 168, src has [1,168]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30dc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 168 ms_handle_reset con 0x56131a30dc00 session 0x561319c74b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 29851648 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:50.202731+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.806577682s of 10.001728058s, submitted: 94
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 29851648 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:51.202877+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 170 handle_osd_map epochs [169,170], i have 170, src has [1,170]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 108519424 unmapped: 29851648 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:52.203048+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23cc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 171 ms_handle_reset con 0x56131a23cc00 session 0x561319f083c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 171 heartbeat osd_stat(store_statfs(0x1b7540000/0x0/0x1bfc00000, data 0x3c3dd42/0x3d4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109592576 unmapped: 28778496 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:53.203176+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:54.203316+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109600768 unmapped: 28770304 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1447743 data_alloc: 285212672 data_used: 2043904
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:55.203453+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109600768 unmapped: 28770304 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 173 handle_osd_map epochs [172,173], i have 173, src has [1,173]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 173 heartbeat osd_stat(store_statfs(0x1b7534000/0x0/0x1bfc00000, data 0x3c44d37/0x3d58000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:56.203624+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109608960 unmapped: 28762112 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a337000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 173 ms_handle_reset con 0x56131a337000 session 0x561319e45860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:57.203810+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109617152 unmapped: 28753920 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:58.204037+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109617152 unmapped: 28753920 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 ms_handle_reset con 0x56131a2fc800 session 0x56131ae805a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:19:59.204197+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109617152 unmapped: 28753920 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 heartbeat osd_stat(store_statfs(0x1b7531000/0x0/0x1bfc00000, data 0x3c47409/0x3d5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 174 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1460020 data_alloc: 285212672 data_used: 2068480
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:00.204371+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109625344 unmapped: 28745728 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.745414734s of 10.008719444s, submitted: 105
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 175 ms_handle_reset con 0x56131a2fc800 session 0x561319b90780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:01.204553+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109633536 unmapped: 28737536 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:02.204741+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109633536 unmapped: 28737536 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 176 heartbeat osd_stat(store_statfs(0x1b7529000/0x0/0x1bfc00000, data 0x3c4be3e/0x3d64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:03.204907+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109641728 unmapped: 28729344 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:04.205039+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109649920 unmapped: 28721152 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1466558 data_alloc: 285212672 data_used: 2068480
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:05.205197+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109658112 unmapped: 28712960 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 49
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 176 heartbeat osd_stat(store_statfs(0x1b7527000/0x0/0x1bfc00000, data 0x3c4c1a6/0x3d67000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:06.205377+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109682688 unmapped: 28688384 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319b58c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 176 ms_handle_reset con 0x561319b58c00 session 0x56131ad83a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:07.205533+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109682688 unmapped: 28688384 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:08.205670+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109699072 unmapped: 28672000 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae53800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 177 ms_handle_reset con 0x56131ae53800 session 0x561319b914a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:09.205824+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109707264 unmapped: 28663808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1469600 data_alloc: 285212672 data_used: 2084864
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:10.205995+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109707264 unmapped: 28663808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.221946716s of 10.398715019s, submitted: 64
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:11.206132+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109748224 unmapped: 28622848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 178 heartbeat osd_stat(store_statfs(0x1b7526000/0x0/0x1bfc00000, data 0x3c4e64d/0x3d68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:12.206240+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109748224 unmapped: 28622848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:13.206389+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109748224 unmapped: 28622848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:14.206601+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109748224 unmapped: 28622848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1472280 data_alloc: 285212672 data_used: 2097152
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:15.206738+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109748224 unmapped: 28622848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 178 heartbeat osd_stat(store_statfs(0x1b7523000/0x0/0x1bfc00000, data 0x3c50d12/0x3d6b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:16.206926+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109748224 unmapped: 28622848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 178 ms_handle_reset con 0x56131a70e800 session 0x56131b051a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:17.207100+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109764608 unmapped: 28606464 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d6c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:18.207294+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a2d6c00 session 0x56131a11de00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:19.207436+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1476281 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c5317a/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:20.207629+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.767402649s of 10.059316635s, submitted: 88
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:21.207865+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b7520000/0x0/0x1bfc00000, data 0x3c53244/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:22.207983+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b7520000/0x0/0x1bfc00000, data 0x3c53244/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:23.208129+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:24.209405+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109805568 unmapped: 28565504 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1475769 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:25.209619+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109813760 unmapped: 28557312 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b7520000/0x0/0x1bfc00000, data 0x3c53244/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:26.209926+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109830144 unmapped: 28540928 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a336c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a336c00 session 0x561318225e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:27.210156+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109830144 unmapped: 28540928 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c53370/0x3d6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f6b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:28.210320+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109838336 unmapped: 28532736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x561318f6b000 session 0x561317bd32c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c5330e/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:29.210481+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109838336 unmapped: 28532736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c5330e/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1476438 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:30.210633+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109838336 unmapped: 28532736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:31.210823+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109838336 unmapped: 28532736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.412330627s of 10.506837845s, submitted: 23
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131ae55c00 session 0x56131ad82f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:32.211003+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f6b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x561318f6b800 session 0x56131a1ef860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109821952 unmapped: 28549120 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c5331e/0x3d6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x561318f30c00 session 0x56131b0f4d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:33.211173+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109830144 unmapped: 28540928 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae53800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751d000/0x0/0x1bfc00000, data 0x3c5333e/0x3d71000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131ae53800 session 0x56131822b0e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131776b800 session 0x561319bb3a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:34.211473+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109838336 unmapped: 28532736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1483789 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:35.211636+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109838336 unmapped: 28532736 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319b58c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x561319b58c00 session 0x561317be9e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:36.211926+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109862912 unmapped: 28508160 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a7e3400 session 0x56131c0630e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751b000/0x0/0x1bfc00000, data 0x3c533a0/0x3d72000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:37.212085+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x561318ae2800 session 0x56131ba18f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109887488 unmapped: 28483584 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae89800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131776b800 session 0x56131ad99860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131ae89800 session 0x56131ba19680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:38.212226+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109928448 unmapped: 28442624 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a70e400 session 0x56131ba19a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a2fc400 session 0x5613183c8960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:39.212338+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131ae55400 session 0x56131b0f5e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c5331e/0x3d6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1480916 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:40.212552+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:41.212687+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:42.212843+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b7520000/0x0/0x1bfc00000, data 0x3c5330e/0x3d6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:43.212987+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:44.213184+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1480916 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:45.213338+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.576462746s of 14.021934509s, submitted: 93
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:46.213495+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e7c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a7e7c00 session 0x561319f08b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:47.213595+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109944832 unmapped: 28426240 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f6b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:48.213766+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 heartbeat osd_stat(store_statfs(0x1b751f000/0x0/0x1bfc00000, data 0x3c533d8/0x3d6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x561318f6b000 session 0x561319c214a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109985792 unmapped: 28385280 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:49.213956+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109985792 unmapped: 28385280 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1482430 data_alloc: 285212672 data_used: 2109440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:50.214696+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a338000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 109993984 unmapped: 28377088 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 ms_handle_reset con 0x56131a338000 session 0x56131c0621e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:51.214840+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110010368 unmapped: 28360704 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:52.214964+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110010368 unmapped: 28360704 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 ms_handle_reset con 0x56131776ac00 session 0x561319c39c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:53.215110+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110084096 unmapped: 28286976 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b751a000/0x0/0x1bfc00000, data 0x3c55a3e/0x3d73000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 ms_handle_reset con 0x56131a7e6800 session 0x56131c2b9860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:54.215849+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110084096 unmapped: 28286976 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b751a000/0x0/0x1bfc00000, data 0x3c55a4f/0x3d74000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1487925 data_alloc: 285212672 data_used: 2121728
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:55.215992+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110100480 unmapped: 28270592 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.711316109s of 10.004819870s, submitted: 78
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 ms_handle_reset con 0x56131776ac00 session 0x56131c2b8f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 heartbeat osd_stat(store_statfs(0x1b751a000/0x0/0x1bfc00000, data 0x3c55a4f/0x3d74000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:56.216277+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110116864 unmapped: 28254208 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:57.216814+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110116864 unmapped: 28254208 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d6c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 ms_handle_reset con 0x56131a2d6c00 session 0x561319e441e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:58.217747+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110116864 unmapped: 28254208 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:20:59.217902+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110116864 unmapped: 28254208 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1492067 data_alloc: 285212672 data_used: 2134016
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:00.218206+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110067712 unmapped: 28303360 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 ms_handle_reset con 0x561318f30000 session 0x56131ad82960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 heartbeat osd_stat(store_statfs(0x1b7516000/0x0/0x1bfc00000, data 0x3c57ef2/0x3d78000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:01.218369+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110051328 unmapped: 28319744 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 ms_handle_reset con 0x56131a70e400 session 0x56131ad98f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 ms_handle_reset con 0x56131a225800 session 0x56131c2b90e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:02.218496+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110059520 unmapped: 28311552 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776ac00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 ms_handle_reset con 0x56131776ac00 session 0x561319f07c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7bc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:03.218631+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 ms_handle_reset con 0x56131ae7bc00 session 0x56131a11d0e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110067712 unmapped: 28303360 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:04.218837+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110067712 unmapped: 28303360 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1490817 data_alloc: 285212672 data_used: 2134016
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 heartbeat osd_stat(store_statfs(0x1b7517000/0x0/0x1bfc00000, data 0x3c57f21/0x3d77000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:05.218983+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 182 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 182 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.854434967s of 10.006495476s, submitted: 50
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110075904 unmapped: 28295168 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:06.219149+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110075904 unmapped: 28295168 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 183 ms_handle_reset con 0x56131a7e6800 session 0x561319bb2780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:07.219350+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110075904 unmapped: 28295168 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae88000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 184 ms_handle_reset con 0x56131ae88000 session 0x561319bb3e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:08.219566+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110075904 unmapped: 28295168 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:09.219853+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110084096 unmapped: 28286976 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae89c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1510275 data_alloc: 285212672 data_used: 2150400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:10.219982+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110084096 unmapped: 28286976 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 186 handle_osd_map epochs [184,186], i have 186, src has [1,186]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 186 ms_handle_reset con 0x56131ae89c00 session 0x56131ad832c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 186 heartbeat osd_stat(store_statfs(0x1b7500000/0x0/0x1bfc00000, data 0x3c63a23/0x3d8c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:11.220191+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 110043136 unmapped: 28327936 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:12.220337+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 186 ms_handle_reset con 0x561318f31800 session 0x56131c6070e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111689728 unmapped: 26681344 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 187 ms_handle_reset con 0x56131aea2800 session 0x56131ba181e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:13.220487+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 187 heartbeat osd_stat(store_statfs(0x1b74fc000/0x0/0x1bfc00000, data 0x3c661cc/0x3d91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111689728 unmapped: 26681344 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 50
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:14.220861+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111747072 unmapped: 26624000 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1515431 data_alloc: 285212672 data_used: 2162688
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:15.221039+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111747072 unmapped: 26624000 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:16.221226+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.697085381s of 10.829483032s, submitted: 337
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111747072 unmapped: 26624000 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:17.221374+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111747072 unmapped: 26624000 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 188 heartbeat osd_stat(store_statfs(0x1b74f9000/0x0/0x1bfc00000, data 0x3c6864c/0x3d94000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:18.221551+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111755264 unmapped: 26615808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:19.221662+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111755264 unmapped: 26615808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1522617 data_alloc: 285212672 data_used: 2174976
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:20.221798+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111755264 unmapped: 26615808 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 189 ms_handle_reset con 0x56131a225400 session 0x56131c607a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 189 ms_handle_reset con 0x56131aea2c00 session 0x56131c607c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:21.221940+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111796224 unmapped: 26574848 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:22.222071+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111804416 unmapped: 26566656 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 ms_handle_reset con 0x56131a224000 session 0x561319da8d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:23.222249+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111804416 unmapped: 26566656 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 ms_handle_reset con 0x56131a224000 session 0x561319bb3e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae53000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 ms_handle_reset con 0x56131ae53000 session 0x56131a1ef860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 heartbeat osd_stat(store_statfs(0x1b74f0000/0x0/0x1bfc00000, data 0x3c6d2b3/0x3d9e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:24.222382+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 ms_handle_reset con 0x56131a80e800 session 0x56131ad82960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111845376 unmapped: 26525696 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1642640 data_alloc: 285212672 data_used: 2191360
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:25.222511+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111878144 unmapped: 26492928 heap: 138371072 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 ms_handle_reset con 0x56131aea3c00 session 0x56131ad82f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae89800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:26.222681+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.725932121s of 10.183723450s, submitted: 121
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 111951872 unmapped: 34816000 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 heartbeat osd_stat(store_statfs(0x1b4cf0000/0x0/0x1bfc00000, data 0x646d2b3/0x659e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:27.222852+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 heartbeat osd_stat(store_statfs(0x1b44f0000/0x0/0x1bfc00000, data 0x6c6d2b3/0x6d9e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 190 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112001024 unmapped: 34766848 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 191 heartbeat osd_stat(store_statfs(0x1b3ceb000/0x0/0x1bfc00000, data 0x746f6ec/0x75a2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:28.222960+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112066560 unmapped: 34701312 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:29.223103+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112197632 unmapped: 34570240 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2144028 data_alloc: 285212672 data_used: 2203648
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:30.223222+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112222208 unmapped: 34545664 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 191 heartbeat osd_stat(store_statfs(0x1b1ce9000/0x0/0x1bfc00000, data 0x946f822/0x95a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 191 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:31.223370+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 192 heartbeat osd_stat(store_statfs(0x1b1ce5000/0x0/0x1bfc00000, data 0x9471e23/0x95a8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112222208 unmapped: 34545664 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:32.223529+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 120619008 unmapped: 26148864 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:33.223709+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112279552 unmapped: 34488320 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:34.223876+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112369664 unmapped: 34398208 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2311408 data_alloc: 285212672 data_used: 2220032
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:35.224020+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 120766464 unmapped: 26001408 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 192 heartbeat osd_stat(store_statfs(0x1b04e7000/0x0/0x1bfc00000, data 0xac71f81/0xada7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:36.224194+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 120897536 unmapped: 25870336 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.667926788s of 10.287294388s, submitted: 81
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:37.224355+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 120987648 unmapped: 25780224 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:38.224507+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1ae4e2000/0x0/0x1bfc00000, data 0xcc74464/0xcdab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112582656 unmapped: 34185216 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:39.224665+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112721920 unmapped: 34045952 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2755270 data_alloc: 285212672 data_used: 2232320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:40.224867+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112844800 unmapped: 33923072 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1abce3000/0x0/0x1bfc00000, data 0xf47452e/0xf5ab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:41.225026+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112926720 unmapped: 33841152 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1abce3000/0x0/0x1bfc00000, data 0xf47452e/0xf5ab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:42.225145+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 112926720 unmapped: 33841152 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:43.225329+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113090560 unmapped: 33677312 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:44.225488+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113188864 unmapped: 33579008 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3138032 data_alloc: 285212672 data_used: 2232320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:45.225661+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113311744 unmapped: 33456128 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:46.225872+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113311744 unmapped: 33456128 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1a84e4000/0x0/0x1bfc00000, data 0x12c74627/0x12daa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:47.226044+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113319936 unmapped: 33447936 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:48.226253+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.213956833s of 11.825005531s, submitted: 49
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1a84e4000/0x0/0x1bfc00000, data 0x12c74627/0x12daa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113328128 unmapped: 33439744 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:49.226429+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113467392 unmapped: 33300480 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:50.226581+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3362192 data_alloc: 285212672 data_used: 2232320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 122003456 unmapped: 24764416 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:51.226744+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113614848 unmapped: 33153024 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 51
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:52.226908+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 113803264 unmapped: 32964608 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1a64e1000/0x0/0x1bfc00000, data 0x14c74a1c/0x14dad000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:53.227147+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 123387904 unmapped: 23379968 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x1a54e0000/0x0/0x1bfc00000, data 0x15c74b1c/0x15dae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:54.227379+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 123584512 unmapped: 23183360 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:55.227558+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3803698 data_alloc: 285212672 data_used: 2232320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115269632 unmapped: 31498240 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:56.227839+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 131080192 unmapped: 15687680 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:57.228053+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115449856 unmapped: 31318016 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 ms_handle_reset con 0x56131aea2800 session 0x561319c39680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:58.228251+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115613696 unmapped: 31154176 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 heartbeat osd_stat(store_statfs(0x19fce3000/0x0/0x1bfc00000, data 0x1b474b82/0x1b5ab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.545500755s of 10.593817711s, submitted: 45
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:21:59.228713+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115703808 unmapped: 31064064 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 ms_handle_reset con 0x56131ae7b800 session 0x56131a7e0000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:00.228941+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4186738 data_alloc: 285212672 data_used: 2232320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124125184 unmapped: 22642688 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 194 ms_handle_reset con 0x56131a80e400 session 0x56131c2b8780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:01.229310+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 194 ms_handle_reset con 0x56131a2fc000 session 0x561319bb34a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124002304 unmapped: 22765568 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:02.229617+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 heartbeat osd_stat(store_statfs(0x19e4d7000/0x0/0x1bfc00000, data 0x1cc7993e/0x1cdb5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124157952 unmapped: 22609920 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:03.229829+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a339800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124338176 unmapped: 22429696 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:04.230042+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 ms_handle_reset con 0x56131a339800 session 0x56131808cd20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116137984 unmapped: 30629888 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f6b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:05.230205+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4771290 data_alloc: 285212672 data_used: 2244608
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 ms_handle_reset con 0x561318f6b000 session 0x561319b7cf00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 ms_handle_reset con 0x56131aea2c00 session 0x56131a7e0d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 30572544 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:06.230637+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 ms_handle_reset con 0x56131a2fc000 session 0x56131b0f41e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116359168 unmapped: 30408704 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a339800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:07.230807+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 ms_handle_reset con 0x56131ae7b800 session 0x56131a11de00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f31800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 ms_handle_reset con 0x561318f31800 session 0x561319b90f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124895232 unmapped: 21872640 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 196 ms_handle_reset con 0x56131a339800 session 0x56131c062960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 196 heartbeat osd_stat(store_statfs(0x198d9f000/0x0/0x1bfc00000, data 0x223b1a7a/0x224ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:08.230946+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 196 ms_handle_reset con 0x561318f30c00 session 0x561317bea000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 ms_handle_reset con 0x56131a7e6800 session 0x56131c6070e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 ms_handle_reset con 0x56131ae89800 session 0x561317bd32c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116670464 unmapped: 30097408 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 ms_handle_reset con 0x56131a80e400 session 0x56131ad914a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 heartbeat osd_stat(store_statfs(0x197cd1000/0x0/0x1bfc00000, data 0x2347e4c5/0x235bd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.662558556s of 10.001616478s, submitted: 173
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:09.231094+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 ms_handle_reset con 0x56131ae55c00 session 0x56131c607a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116703232 unmapped: 30064640 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:10.231330+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5034578 data_alloc: 285212672 data_used: 2260992
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 ms_handle_reset con 0x56131a7e6800 session 0x56131ba18780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 ms_handle_reset con 0x56131a80e400 session 0x56131c606000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae89800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116719616 unmapped: 30048256 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:11.231461+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 ms_handle_reset con 0x56131ae89800 session 0x56131a7e1e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30c400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 ms_handle_reset con 0x561318f30c00 session 0x561319f07c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 ms_handle_reset con 0x56131a30c400 session 0x56131c062000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 ms_handle_reset con 0x56131a224000 session 0x561319b72d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114409472 unmapped: 32358400 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 ms_handle_reset con 0x561318f30c00 session 0x56131ba185a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:12.231604+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114442240 unmapped: 32325632 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 199 ms_handle_reset con 0x56131a225400 session 0x56131ad99860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:13.231820+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114442240 unmapped: 32325632 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:14.232044+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 199 heartbeat osd_stat(store_statfs(0x1b70ca000/0x0/0x1bfc00000, data 0x3c83299/0x3dc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4d6f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114442240 unmapped: 32325632 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 200 heartbeat osd_stat(store_statfs(0x1b70ca000/0x0/0x1bfc00000, data 0x3c83299/0x3dc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4d6f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:15.232237+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 200 ms_handle_reset con 0x56131ae78800 session 0x561319e441e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1666737 data_alloc: 285212672 data_used: 2285568
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114417664 unmapped: 32350208 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 201 heartbeat osd_stat(store_statfs(0x1b70c2000/0x0/0x1bfc00000, data 0x3c87ea7/0x3dca000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4d6f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:16.232397+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 201 ms_handle_reset con 0x56131aea3400 session 0x561319f08b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 201 ms_handle_reset con 0x56131aea3400 session 0x5613183c8960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114425856 unmapped: 32342016 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:17.232651+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114040832 unmapped: 32727040 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23c800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:18.232840+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 202 ms_handle_reset con 0x56131a23c800 session 0x561317be9860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae53800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 203 ms_handle_reset con 0x56131a2fc400 session 0x56131c6061e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 114073600 unmapped: 32694272 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:19.233117+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a203000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.224403381s of 10.080433846s, submitted: 307
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 203 ms_handle_reset con 0x56131a203000 session 0x56131a1ee780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae54400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 203 heartbeat osd_stat(store_statfs(0x1b70b9000/0x0/0x1bfc00000, data 0x3c8cf0d/0x3dd4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4d6f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 204 ms_handle_reset con 0x56131ae53800 session 0x5613182243c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a203000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 204 ms_handle_reset con 0x56131a203000 session 0x56131882ed20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115122176 unmapped: 31645696 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 204 ms_handle_reset con 0x56131ae54400 session 0x56131ad8be00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:20.233295+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23c800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1695462 data_alloc: 285212672 data_used: 2297856
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 204 ms_handle_reset con 0x56131a2fc400 session 0x561319da3e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 205 ms_handle_reset con 0x56131a23c800 session 0x56131a7e1860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115146752 unmapped: 31621120 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:21.233580+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae53000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 115212288 unmapped: 31555584 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 206 ms_handle_reset con 0x56131ae53000 session 0x561319bb2780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 206 ms_handle_reset con 0x56131aea3400 session 0x56131808d680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:22.233749+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 30474240 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 207 heartbeat osd_stat(store_statfs(0x1b8088000/0x0/0x1bfc00000, data 0x3c969a4/0x3de4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 207 ms_handle_reset con 0x561318f30000 session 0x56131c062b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23d000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 207 ms_handle_reset con 0x56131a23d000 session 0x561319bb2000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:23.233970+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116301824 unmapped: 30466048 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 207 ms_handle_reset con 0x56131a80e800 session 0x5613182252c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:24.234166+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30cc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116326400 unmapped: 30441472 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 ms_handle_reset con 0x56131a80e400 session 0x56131ae81e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 ms_handle_reset con 0x56131a30cc00 session 0x561319c47c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:25.234303+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1704525 data_alloc: 285212672 data_used: 2326528
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116375552 unmapped: 30392320 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 ms_handle_reset con 0x56131a80e400 session 0x56131a7e0b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:26.234481+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 ms_handle_reset con 0x561318f30000 session 0x561317be85a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116416512 unmapped: 30351360 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23d000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:27.234637+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 ms_handle_reset con 0x56131a23d000 session 0x56131c6074a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 heartbeat osd_stat(store_statfs(0x1b8087000/0x0/0x1bfc00000, data 0x3c98a5c/0x3de6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 208 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 30343168 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:28.234856+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 30343168 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:29.235139+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 30343168 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:30.235371+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1706942 data_alloc: 285212672 data_used: 2338816
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.238841057s of 11.190751076s, submitted: 310
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116424704 unmapped: 30343168 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:31.235525+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea2000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 209 ms_handle_reset con 0x56131aea2000 session 0x56131c063e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116432896 unmapped: 30334976 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:32.235651+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116441088 unmapped: 30326784 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 210 heartbeat osd_stat(store_statfs(0x1b807e000/0x0/0x1bfc00000, data 0x3c9d466/0x3def000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:33.235853+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 30302208 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:34.236009+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116465664 unmapped: 30302208 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:35.236185+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1717844 data_alloc: 285212672 data_used: 2351104
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 212 heartbeat osd_stat(store_statfs(0x1b8075000/0x0/0x1bfc00000, data 0x3ca1ecc/0x3df7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116473856 unmapped: 30294016 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70ec00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 212 ms_handle_reset con 0x56131a70ec00 session 0x56131c063680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:36.236356+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 213 ms_handle_reset con 0x561318ae3400 session 0x561319bb25a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 116506624 unmapped: 30261248 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:37.236538+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117571584 unmapped: 29196288 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319e8e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 214 handle_osd_map epochs [213,214], i have 214, src has [1,214]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:38.236691+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 214 ms_handle_reset con 0x561319e8e800 session 0x561319c38960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117596160 unmapped: 29171712 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:39.236853+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319e8f000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 214 ms_handle_reset con 0x561319e8f000 session 0x56131882f4a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117596160 unmapped: 29171712 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 214 heartbeat osd_stat(store_statfs(0x1b8071000/0x0/0x1bfc00000, data 0x3ca69fa/0x3dfd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:40.237001+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1726230 data_alloc: 285212672 data_used: 2371584
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.611667633s of 10.007593155s, submitted: 112
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117604352 unmapped: 29163520 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:41.237188+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae79400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 215 ms_handle_reset con 0x56131ae79400 session 0x56131a11d860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117596160 unmapped: 29171712 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:42.237357+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117612544 unmapped: 29155328 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:43.237517+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23c800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131a2fc400 session 0x561317be8d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131a23c800 session 0x561319da2d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 heartbeat osd_stat(store_statfs(0x1b806b000/0x0/0x1bfc00000, data 0x3ca8fab/0x3e02000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117620736 unmapped: 29147136 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:44.237635+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae54400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131ae54400 session 0x561319f07a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7bc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131ae7bc00 session 0x56131ba19e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 heartbeat osd_stat(store_statfs(0x1b8067000/0x0/0x1bfc00000, data 0x3cab3c4/0x3e06000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131a2fc000 session 0x56131ae80d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117653504 unmapped: 29114368 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:45.237735+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1734379 data_alloc: 285212672 data_used: 2387968
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117661696 unmapped: 29106176 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:46.237967+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117669888 unmapped: 29097984 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:47.238100+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117669888 unmapped: 29097984 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 heartbeat osd_stat(store_statfs(0x1b8066000/0x0/0x1bfc00000, data 0x3cab5cc/0x3e08000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:48.238256+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131a2d6800 session 0x56131a1efa40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117678080 unmapped: 29089792 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:49.238409+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a336400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 ms_handle_reset con 0x56131a336400 session 0x561317bd23c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ac3c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117702656 unmapped: 29065216 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:50.238555+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1742222 data_alloc: 285212672 data_used: 2396160
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.136951447s of 10.339798927s, submitted: 54
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d7c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117702656 unmapped: 29065216 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 ms_handle_reset con 0x561318ac3c00 session 0x56131c2b8960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 heartbeat osd_stat(store_statfs(0x1b8062000/0x0/0x1bfc00000, data 0x3cadb2e/0x3e0b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 ms_handle_reset con 0x5613167a9c00 session 0x56131b0f43c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 ms_handle_reset con 0x56131a2d7c00 session 0x561319f065a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:51.238702+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 ms_handle_reset con 0x5613167a9c00 session 0x56131c0623c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 117784576 unmapped: 28983296 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ac3c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 heartbeat osd_stat(store_statfs(0x1b805e000/0x0/0x1bfc00000, data 0x3cb0281/0x3e10000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:52.238814+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127787008 unmapped: 18980864 heap: 146767872 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:53.238920+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127950848 unmapped: 27222016 heap: 155172864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:54.239020+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 ms_handle_reset con 0x56131a202400 session 0x56131808c780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 119685120 unmapped: 35487744 heap: 155172864 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:55.239184+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2647915 data_alloc: 285212672 data_used: 2400256
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 132317184 unmapped: 27058176 heap: 159375360 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 heartbeat osd_stat(store_statfs(0x1afc5d000/0x0/0x1bfc00000, data 0xc0b03ad/0xc211000, compress 0x0/0x0/0x0, omap 0x649, meta 0x3d8f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:56.239380+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 119955456 unmapped: 39419904 heap: 159375360 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:57.239510+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124174336 unmapped: 35201024 heap: 159375360 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:58.239661+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124100608 unmapped: 35274752 heap: 159375360 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:22:59.239879+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 heartbeat osd_stat(store_statfs(0x1a86b8000/0x0/0x1bfc00000, data 0x124b27fe/0x12615000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [1,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146161664 unmapped: 17416192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:00.240682+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3889741 data_alloc: 285212672 data_used: 2412544
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.318271160s of 10.136363983s, submitted: 396
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a23cc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 130465792 unmapped: 33112064 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:01.240824+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x56131a23cc00 session 0x56131c2b85a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 122126336 unmapped: 41451520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:02.240998+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319e8e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 134750208 unmapped: 28827648 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:03.241185+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 heartbeat osd_stat(store_statfs(0x19dab9000/0x0/0x1bfc00000, data 0x1d0b27fe/0x1d215000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [1,2,3,4,5] op hist [0,0,0,0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139051008 unmapped: 24526848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:04.241582+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x561319e8e400 session 0x561319c381e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 126476288 unmapped: 37101568 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:05.241807+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae79800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 heartbeat osd_stat(store_statfs(0x19beb9000/0x0/0x1bfc00000, data 0x1ecb27fe/0x1ee15000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x56131ae79800 session 0x561318ad3860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4877562 data_alloc: 285212672 data_used: 2412544
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 126558208 unmapped: 37019648 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:06.243874+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x56131a2d6800 session 0x561319bb3680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x561318ac3c00 session 0x561319c74b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319e8e400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x561319e8e400 session 0x561317beab40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x5613167a9c00 session 0x561319da3860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 122437632 unmapped: 41140224 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x56131a202400 session 0x56131ad99680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 heartbeat osd_stat(store_statfs(0x1996b5000/0x0/0x1bfc00000, data 0x210b29df/0x21219000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:07.244023+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 heartbeat osd_stat(store_statfs(0x1996b5000/0x0/0x1bfc00000, data 0x210b29df/0x21219000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 122593280 unmapped: 40984576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x5613167a9c00 session 0x561319c394a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:08.244186+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 ms_handle_reset con 0x56131776b000 session 0x561317bd3e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 122601472 unmapped: 40976384 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:09.244381+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 123650048 unmapped: 39927808 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:10.244731+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1896247 data_alloc: 285212672 data_used: 2424832
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 123650048 unmapped: 39927808 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.704465866s of 10.134362221s, submitted: 432
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 221 heartbeat osd_stat(store_statfs(0x1b52b3000/0x0/0x1bfc00000, data 0x3cb4fe4/0x3e19000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:11.245119+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 123650048 unmapped: 39927808 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:12.245360+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70f400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 222 ms_handle_reset con 0x56131a70f400 session 0x56131ae81860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124010496 unmapped: 39567360 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:13.245627+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 222 heartbeat osd_stat(store_statfs(0x1b6aaa000/0x0/0x1bfc00000, data 0x3cb9c6c/0x3e22000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 52
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124067840 unmapped: 39510016 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:14.245949+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fdc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 223 ms_handle_reset con 0x56131a2fdc00 session 0x56131c607e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124084224 unmapped: 39493632 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1909623 data_alloc: 285212672 data_used: 2441216
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:16.052678+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124092416 unmapped: 39485440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 223 heartbeat osd_stat(store_statfs(0x1b6aa6000/0x0/0x1bfc00000, data 0x3cbc294/0x3e27000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:17.052948+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 124092416 unmapped: 39485440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 224 heartbeat osd_stat(store_statfs(0x1b6aa8000/0x0/0x1bfc00000, data 0x3cbc1c3/0x3e25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:18.053148+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125149184 unmapped: 38428672 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7d000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fc400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 ms_handle_reset con 0x56131a2fc400 session 0x56131a11d0e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 ms_handle_reset con 0x56131ae7d000 session 0x561319c21a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 ms_handle_reset con 0x5613167a9c00 session 0x561317bd3c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:19.053298+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 ms_handle_reset con 0x56131776b000 session 0x56131822bc20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2fdc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125149184 unmapped: 38428672 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 ms_handle_reset con 0x56131a2fdc00 session 0x56131a7e1680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:20.053498+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125173760 unmapped: 38404096 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70f400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 ms_handle_reset con 0x56131a70f400 session 0x56131ad82960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1917683 data_alloc: 285212672 data_used: 2441216
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 heartbeat osd_stat(store_statfs(0x1b6aa0000/0x0/0x1bfc00000, data 0x3cc0e01/0x3e2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:21.053707+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.623670578s of 10.007481575s, submitted: 378
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125247488 unmapped: 38330368 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 226 ms_handle_reset con 0x5613167a9c00 session 0x56131ad82f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:22.054577+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125288448 unmapped: 38289408 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 ms_handle_reset con 0x56131776b000 session 0x56131ad832c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:23.054728+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125362176 unmapped: 38215680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 ms_handle_reset con 0x56131a30a400 session 0x56131ae812c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 ms_handle_reset con 0x56131ae7b000 session 0x56131ae805a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:24.054874+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125378560 unmapped: 38199296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 ms_handle_reset con 0x56131ae7b800 session 0x561319c46960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 heartbeat osd_stat(store_statfs(0x1b6a95000/0x0/0x1bfc00000, data 0x3cc7d51/0x3e37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:25.054989+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125476864 unmapped: 38100992 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 ms_handle_reset con 0x56131ae7b800 session 0x56131ad8be00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1981922 data_alloc: 285212672 data_used: 2465792
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:26.055182+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125485056 unmapped: 38092800 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 229 ms_handle_reset con 0x56131776b000 session 0x56131a1ef0e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 229 ms_handle_reset con 0x5613167a9c00 session 0x56131ad8b680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:27.055347+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 125485056 unmapped: 38092800 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 229 ms_handle_reset con 0x56131a30a400 session 0x56131ad8a3c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 229 ms_handle_reset con 0x56131ae7b000 session 0x56131a7e0d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 230 ms_handle_reset con 0x5613167a9c00 session 0x56131a7e1e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:28.055644+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136773632 unmapped: 26804224 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 230 ms_handle_reset con 0x56131a30a400 session 0x56131a7e1860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 230 heartbeat osd_stat(store_statfs(0x1b63ee000/0x0/0x1bfc00000, data 0x43669de/0x44de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 230 ms_handle_reset con 0x56131ae78000 session 0x5613183cb860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 231 ms_handle_reset con 0x56131ae7b800 session 0x56131a7e14a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318ae2c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 231 ms_handle_reset con 0x561318ae2c00 session 0x56131a1ee780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 231 ms_handle_reset con 0x56131776b000 session 0x561319b91a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:29.055828+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 126263296 unmapped: 37314560 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 ms_handle_reset con 0x5613167a9c00 session 0x56131a1eed20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:30.055953+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 126296064 unmapped: 37281792 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2125975 data_alloc: 285212672 data_used: 2478080
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 heartbeat osd_stat(store_statfs(0x1b541c000/0x0/0x1bfc00000, data 0x5332c75/0x54b0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:31.056080+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 126312448 unmapped: 37265408 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.300298691s of 10.083432198s, submitted: 242
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 ms_handle_reset con 0x56131a30a400 session 0x56131ad821e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 ms_handle_reset con 0x56131ae7b800 session 0x561319da23c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 ms_handle_reset con 0x56131ae78000 session 0x56131ad912c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 heartbeat osd_stat(store_statfs(0x1b541f000/0x0/0x1bfc00000, data 0x5332c65/0x54af000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:32.056178+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 126795776 unmapped: 36782080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 ms_handle_reset con 0x56131ae78000 session 0x56131822ab40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 ms_handle_reset con 0x56131776b000 session 0x56131ad825a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 ms_handle_reset con 0x5613167a9c00 session 0x56131c2b8f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 ms_handle_reset con 0x56131a30a400 session 0x56131a7e01e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae7b800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:33.056439+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 ms_handle_reset con 0x56131ae7b800 session 0x56131a11d0e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127983616 unmapped: 35594240 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 ms_handle_reset con 0x56131a202c00 session 0x561318224d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 heartbeat osd_stat(store_statfs(0x1b4c80000/0x0/0x1bfc00000, data 0x5ad0152/0x5c4d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:34.089267+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128016384 unmapped: 35561472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x5613167a9c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 235 ms_handle_reset con 0x5613167a9c00 session 0x56131a11c1e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:35.089415+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128016384 unmapped: 35561472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 235 ms_handle_reset con 0x56131776b000 session 0x561319bb3860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2137758 data_alloc: 285212672 data_used: 2510848
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:36.089598+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128016384 unmapped: 35561472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30a400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:37.089862+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128057344 unmapped: 35520512 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 237 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 237 ms_handle_reset con 0x56131a30a400 session 0x561319da23c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae78000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 237 ms_handle_reset con 0x56131ae78000 session 0x561317bd3c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:38.090022+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128163840 unmapped: 35414016 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a2d6c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:39.090194+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128163840 unmapped: 35414016 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 238 ms_handle_reset con 0x56131a2d6c00 session 0x56131c607e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131b070c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 238 heartbeat osd_stat(store_statfs(0x1b63d7000/0x0/0x1bfc00000, data 0x4376fa3/0x44f5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [0,0,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 238 ms_handle_reset con 0x56131b070c00 session 0x561319e44d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:40.090352+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128212992 unmapped: 35364864 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1983390 data_alloc: 285212672 data_used: 2502656
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131aea3400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 238 ms_handle_reset con 0x56131aea3400 session 0x56131808d860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:41.090495+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128262144 unmapped: 35315712 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.805157661s of 10.686613083s, submitted: 258
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:42.090660+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128286720 unmapped: 35291136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:43.090838+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127852544 unmapped: 35725312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 240 heartbeat osd_stat(store_statfs(0x1b6a65000/0x0/0x1bfc00000, data 0x3ce411a/0x3e67000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:44.091022+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127868928 unmapped: 35708928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:45.091236+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127877120 unmapped: 35700736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1995508 data_alloc: 285212672 data_used: 2527232
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:46.091400+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127918080 unmapped: 35659776 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:47.091594+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127918080 unmapped: 35659776 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 241 heartbeat osd_stat(store_statfs(0x1b6a63000/0x0/0x1bfc00000, data 0x3ce67cc/0x3e6b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 241 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 241 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:48.091725+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 127926272 unmapped: 35651584 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e2400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 242 ms_handle_reset con 0x56131a7e2400 session 0x56131808cd20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:49.091901+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128598016 unmapped: 34979840 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 242 heartbeat osd_stat(store_statfs(0x1b6159000/0x0/0x1bfc00000, data 0x45edc49/0x4774000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 242 heartbeat osd_stat(store_statfs(0x1b6159000/0x0/0x1bfc00000, data 0x45edc49/0x4774000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:50.092073+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128614400 unmapped: 34963456 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2073264 data_alloc: 285212672 data_used: 2539520
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:51.092262+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128614400 unmapped: 34963456 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 242 heartbeat osd_stat(store_statfs(0x1b6159000/0x0/0x1bfc00000, data 0x45edc49/0x4774000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:52.092431+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.870986938s of 10.234692574s, submitted: 167
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 242 ms_handle_reset con 0x56131776b400 session 0x56131808c3c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128638976 unmapped: 34938880 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:53.092570+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128712704 unmapped: 34865152 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e6800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:54.092711+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 128745472 unmapped: 34832384 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b6153000/0x0/0x1bfc00000, data 0x45f015d/0x477a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:55.092813+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129622016 unmapped: 33955840 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2104917 data_alloc: 301989888 data_used: 5734400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:56.093102+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129622016 unmapped: 33955840 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:57.093291+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129622016 unmapped: 33955840 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:58.093485+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129622016 unmapped: 33955840 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:23:59.093658+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129646592 unmapped: 33931264 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:00.093837+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129646592 unmapped: 33931264 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b6152000/0x0/0x1bfc00000, data 0x45f0293/0x477c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2105453 data_alloc: 301989888 data_used: 5734400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:01.093974+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129646592 unmapped: 33931264 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:02.094084+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.921300888s of 10.029424667s, submitted: 37
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129654784 unmapped: 33923072 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:03.094194+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129662976 unmapped: 33914880 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:04.094404+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129662976 unmapped: 33914880 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:05.094558+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b6150000/0x0/0x1bfc00000, data 0x45f0426/0x477d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 129662976 unmapped: 33914880 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2108107 data_alloc: 301989888 data_used: 5738496
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:06.095054+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 134840320 unmapped: 28737536 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:07.095239+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 134840320 unmapped: 28737536 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:08.095401+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:09.095543+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136552448 unmapped: 27025408 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:10.095719+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137043968 unmapped: 26533888 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b533e000/0x0/0x1bfc00000, data 0x53ee455/0x557a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2211915 data_alloc: 301989888 data_used: 5726208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:11.095854+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135028736 unmapped: 28549120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:12.096020+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135028736 unmapped: 28549120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:13.096187+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.562756538s of 11.025742531s, submitted: 143
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135028736 unmapped: 28549120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:14.096476+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135028736 unmapped: 28549120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:15.096625+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2214409 data_alloc: 301989888 data_used: 5726208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:16.096799+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b5352000/0x0/0x1bfc00000, data 0x53ee62e/0x557c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:17.096984+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:18.097160+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:19.097350+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135045120 unmapped: 28532736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:20.097512+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135045120 unmapped: 28532736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2216305 data_alloc: 301989888 data_used: 5726208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:21.097654+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 28516352 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:22.097805+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b5352000/0x0/0x1bfc00000, data 0x53ee717/0x557c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 28508160 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:23.097933+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 28516352 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 heartbeat osd_stat(store_statfs(0x1b5354000/0x0/0x1bfc00000, data 0x53ee6ab/0x557a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:24.098057+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 28516352 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:25.098177+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 28516352 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.206362724s of 12.318105698s, submitted: 27
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2214299 data_alloc: 301989888 data_used: 5726208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:26.098298+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 244 heartbeat osd_stat(store_statfs(0x1b5354000/0x0/0x1bfc00000, data 0x53ee6ab/0x557a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 28516352 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:27.098518+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:28.098677+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:29.098823+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 244 heartbeat osd_stat(store_statfs(0x1b534f000/0x0/0x1bfc00000, data 0x53f0c44/0x557e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:30.098966+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2218155 data_alloc: 301989888 data_used: 5738496
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 244 heartbeat osd_stat(store_statfs(0x1b534f000/0x0/0x1bfc00000, data 0x53f0c44/0x557e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:31.099153+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 244 ms_handle_reset con 0x56131a70e800 session 0x56131a7e1680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135036928 unmapped: 28540928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:32.099288+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a338000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 245 ms_handle_reset con 0x56131a338000 session 0x56131a7e1860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135045120 unmapped: 28532736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:33.099419+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135053312 unmapped: 28524544 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:34.099553+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135053312 unmapped: 28524544 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:35.099861+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 247 ms_handle_reset con 0x56131a202400 session 0x561319b7d0e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 28516352 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.803734779s of 10.002998352s, submitted: 77
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2234934 data_alloc: 301989888 data_used: 5750784
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:36.100087+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 248 heartbeat osd_stat(store_statfs(0x1b5340000/0x0/0x1bfc00000, data 0x53f7d3f/0x558d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 28508160 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:37.100259+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 28508160 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:38.100408+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae54400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135069696 unmapped: 28508160 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:39.100590+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 249 ms_handle_reset con 0x56131ae54400 session 0x56131882ed20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135077888 unmapped: 28499968 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 249 heartbeat osd_stat(store_statfs(0x1b533b000/0x0/0x1bfc00000, data 0x53fa498/0x5593000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:40.100843+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 53
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135094272 unmapped: 28483584 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2245091 data_alloc: 301989888 data_used: 5763072
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:41.100978+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135102464 unmapped: 28475392 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 249 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 250 ms_handle_reset con 0x56131a225400 session 0x56131ad90b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:42.101121+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135118848 unmapped: 28459008 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 251 ms_handle_reset con 0x56131a202400 session 0x56131ad914a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:43.101259+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135127040 unmapped: 28450816 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 251 heartbeat osd_stat(store_statfs(0x1b532e000/0x0/0x1bfc00000, data 0x54019ca/0x559f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:44.101481+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135151616 unmapped: 28426240 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318fd8c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:45.101666+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136208384 unmapped: 27369472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.700420380s of 10.003545761s, submitted: 82
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 252 ms_handle_reset con 0x561318fd8c00 session 0x56131b051860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2257141 data_alloc: 301989888 data_used: 5779456
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a80f000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 252 ms_handle_reset con 0x56131a80f000 session 0x561319e45e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:46.101852+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136232960 unmapped: 27344896 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e4000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:47.102075+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136232960 unmapped: 27344896 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 253 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:48.102202+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 253 ms_handle_reset con 0x56131a7e4000 session 0x561319f06d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136257536 unmapped: 27320320 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:49.102351+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 254 heartbeat osd_stat(store_statfs(0x1b5328000/0x0/0x1bfc00000, data 0x54061d7/0x55a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135544832 unmapped: 28033024 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:50.102543+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 254 ms_handle_reset con 0x56131a7e6800 session 0x56131b051a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 254 ms_handle_reset con 0x56131776b400 session 0x56131822b680
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 135544832 unmapped: 28033024 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2262701 data_alloc: 301989888 data_used: 5779456
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318fd8c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 254 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:51.102674+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 255 ms_handle_reset con 0x561318fd8c00 session 0x56131ae81860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 134266880 unmapped: 29310976 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:52.102848+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 134266880 unmapped: 29310976 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:53.102993+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136380416 unmapped: 27197440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:54.103137+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 257 handle_osd_map epochs [257,257], i have 257, src has [1,257]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 257 heartbeat osd_stat(store_statfs(0x1b5481000/0x0/0x1bfc00000, data 0x3d0a4f5/0x3eac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136380416 unmapped: 27197440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:55.103415+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136380416 unmapped: 27197440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.781674385s of 10.305429459s, submitted: 252
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2086112 data_alloc: 285212672 data_used: 2621440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:56.103657+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136388608 unmapped: 27189248 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:57.103850+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136413184 unmapped: 27164672 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561319e8e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 260 heartbeat osd_stat(store_statfs(0x1b5474000/0x0/0x1bfc00000, data 0x3d119a2/0x3eb8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:58.104031+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 260 ms_handle_reset con 0x561319e8e800 session 0x561319bb2d20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 260 heartbeat osd_stat(store_statfs(0x1b5475000/0x0/0x1bfc00000, data 0x3d13ef3/0x3eb8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 27148288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:24:59.104234+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 27148288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:00.104398+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 27148288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2088128 data_alloc: 285212672 data_used: 2621440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:01.104590+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 27148288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:02.104733+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 260 heartbeat osd_stat(store_statfs(0x1b5475000/0x0/0x1bfc00000, data 0x3d13f22/0x3eb7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 27148288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:03.104913+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136429568 unmapped: 27148288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:04.105040+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136445952 unmapped: 27131904 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x561318f30400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:05.105174+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 262 ms_handle_reset con 0x561318f30400 session 0x56131ad90f00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136462336 unmapped: 27115520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2095154 data_alloc: 285212672 data_used: 2621440
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:06.105315+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a70e800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.122555733s of 10.555744171s, submitted: 199
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136462336 unmapped: 27115520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 263 ms_handle_reset con 0x56131a70e800 session 0x56131b051c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:07.105517+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136462336 unmapped: 27115520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 263 heartbeat osd_stat(store_statfs(0x1b5466000/0x0/0x1bfc00000, data 0x3d1b095/0x3ec7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 263 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 263 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 264 ms_handle_reset con 0x56131776b400 session 0x561319c47c20
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:08.105647+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136462336 unmapped: 27115520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 264 ms_handle_reset con 0x56131a225000 session 0x561319e44b40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:09.105850+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136470528 unmapped: 27107328 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:10.106065+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a30bc00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 265 ms_handle_reset con 0x56131a30bc00 session 0x561319e45860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae79400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 266 ms_handle_reset con 0x56131ae79400 session 0x56131ad91860
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2109850 data_alloc: 285212672 data_used: 2625536
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:11.106231+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 266 heartbeat osd_stat(store_statfs(0x1b545a000/0x0/0x1bfc00000, data 0x3d22106/0x3ed3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 266 ms_handle_reset con 0x56131a224000 session 0x561319f070e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a224000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 266 ms_handle_reset con 0x56131a224000 session 0x56131a7e12c0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:12.106365+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 267 heartbeat osd_stat(store_statfs(0x1b545a000/0x0/0x1bfc00000, data 0x3d22106/0x3ed3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131776b400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 267 ms_handle_reset con 0x56131776b400 session 0x561319da3e00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a225000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:13.106513+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 ms_handle_reset con 0x56131a225000 session 0x561319bb3a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:14.106678+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 heartbeat osd_stat(store_statfs(0x1b5453000/0x0/0x1bfc00000, data 0x3d26aac/0x3eda000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 268 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae88000
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 269 ms_handle_reset con 0x56131ae88000 session 0x56131ad990e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131ae55800
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:15.106837+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 269 ms_handle_reset con 0x56131ae55800 session 0x561318ad2960
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2117948 data_alloc: 285212672 data_used: 2629632
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:16.107026+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:17.107251+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 269 heartbeat osd_stat(store_statfs(0x1b5450000/0x0/0x1bfc00000, data 0x3d29052/0x3edc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136478720 unmapped: 27099136 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280651093s of 11.534325600s, submitted: 83
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:18.107385+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 27090944 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:19.107536+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 27090944 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:20.107714+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 27090944 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2119894 data_alloc: 285212672 data_used: 2629632
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:21.107889+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 27090944 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:22.108065+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 270 heartbeat osd_stat(store_statfs(0x1b544d000/0x0/0x1bfc00000, data 0x3d2b487/0x3ee0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136486912 unmapped: 27090944 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:23.108212+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136495104 unmapped: 27082752 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b5449000/0x0/0x1bfc00000, data 0x3d2d8a0/0x3ee4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:24.108339+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b5449000/0x0/0x1bfc00000, data 0x3d2d8a0/0x3ee4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136503296 unmapped: 27074560 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b5449000/0x0/0x1bfc00000, data 0x3d2d8a0/0x3ee4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:25.108537+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136503296 unmapped: 27074560 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2123264 data_alloc: 285212672 data_used: 2629632
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:26.108725+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136503296 unmapped: 27074560 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:27.108956+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136503296 unmapped: 27074560 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:28.109102+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.485260963s of 10.577762604s, submitted: 38
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136527872 unmapped: 27049984 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:29.109257+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136536064 unmapped: 27041792 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:30.109419+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b5449000/0x0/0x1bfc00000, data 0x3d2db99/0x3ee5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:31.109589+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2123784 data_alloc: 285212672 data_used: 2629632
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b5449000/0x0/0x1bfc00000, data 0x3d2db99/0x3ee5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:32.109757+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:33.109941+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:34.110108+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:35.110278+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:36.110434+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2123286 data_alloc: 285212672 data_used: 2629632
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136544256 unmapped: 27033600 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:37.110630+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136577024 unmapped: 27000832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b544a000/0x0/0x1bfc00000, data 0x3d2dc2d/0x3ee4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:38.110851+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136577024 unmapped: 27000832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:39.110988+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136577024 unmapped: 27000832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:40.111117+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.711018562s of 11.757222176s, submitted: 10
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136577024 unmapped: 27000832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:41.111284+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2124862 data_alloc: 285212672 data_used: 2629632
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136577024 unmapped: 27000832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:42.111415+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 heartbeat osd_stat(store_statfs(0x1b5448000/0x0/0x1bfc00000, data 0x3d2de2d/0x3ee6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136577024 unmapped: 27000832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:43.111632+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 136806400 unmapped: 26771456 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 54
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:44.111850+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:45.111996+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:46.112156+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2130740 data_alloc: 285212672 data_used: 2641920
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:47.112309+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 272 heartbeat osd_stat(store_statfs(0x1b5443000/0x0/0x1bfc00000, data 0x3d304c3/0x3eea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:48.112490+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:49.112632+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:50.112828+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.072029114s of 10.264289856s, submitted: 297
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:51.113007+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2131628 data_alloc: 285212672 data_used: 2641920
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137003008 unmapped: 26574848 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:52.113216+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137011200 unmapped: 26566656 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:53.113372+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137019392 unmapped: 26558464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b5441000/0x0/0x1bfc00000, data 0x3d306b5/0x3eec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:54.113562+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137019392 unmapped: 26558464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:55.113716+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137019392 unmapped: 26558464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:56.113857+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2138778 data_alloc: 285212672 data_used: 2654208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137019392 unmapped: 26558464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:57.114020+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137027584 unmapped: 26550272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:58.114175+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137027584 unmapped: 26550272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:25:59.114301+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b543f000/0x0/0x1bfc00000, data 0x3d32c46/0x3eef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137027584 unmapped: 26550272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:00.114457+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.802826881s of 10.000325203s, submitted: 52
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137035776 unmapped: 26542080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:01.114610+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2140080 data_alloc: 285212672 data_used: 2654208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137035776 unmapped: 26542080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:02.114756+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137035776 unmapped: 26542080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:03.114865+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137060352 unmapped: 26517504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:04.114976+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b543b000/0x0/0x1bfc00000, data 0x3d32e6f/0x3ef1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137060352 unmapped: 26517504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:05.115123+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137060352 unmapped: 26517504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:06.115212+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2142750 data_alloc: 285212672 data_used: 2654208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137060352 unmapped: 26517504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:07.115413+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137060352 unmapped: 26517504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:08.115535+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137060352 unmapped: 26517504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:09.115719+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137068544 unmapped: 26509312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:10.115863+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b543c000/0x0/0x1bfc00000, data 0x3d32e72/0x3ef1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.833648682s of 10.003560066s, submitted: 36
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:11.115990+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2146014 data_alloc: 285212672 data_used: 2654208
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:12.116193+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:13.116368+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b5438000/0x0/0x1bfc00000, data 0x3d32f43/0x3ef0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:14.116620+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:15.116860+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:16.117028+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2143142 data_alloc: 285212672 data_used: 2662400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b543e000/0x0/0x1bfc00000, data 0x3d32edd/0x3eef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:17.117227+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:18.117363+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:19.117500+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:20.117691+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.821280479s of 10.000857353s, submitted: 37
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137068544 unmapped: 26509312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:21.118979+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2143194 data_alloc: 285212672 data_used: 2662400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137068544 unmapped: 26509312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:22.119153+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b543f000/0x0/0x1bfc00000, data 0x3d32f46/0x3eee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137068544 unmapped: 26509312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:23.119331+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137076736 unmapped: 26501120 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:24.119482+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137084928 unmapped: 26492928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:25.119628+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 heartbeat osd_stat(store_statfs(0x1b543e000/0x0/0x1bfc00000, data 0x3d330d4/0x3eee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137084928 unmapped: 26492928 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:26.119770+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2147626 data_alloc: 285212672 data_used: 2674688
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137093120 unmapped: 26484736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:27.119982+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137093120 unmapped: 26484736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:28.120144+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137093120 unmapped: 26484736 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:29.120293+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137101312 unmapped: 26476544 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:30.120435+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 274 heartbeat osd_stat(store_statfs(0x1b5439000/0x0/0x1bfc00000, data 0x3d356d7/0x3ef3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.811932564s of 10.003783226s, submitted: 59
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137125888 unmapped: 26451968 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:31.120573+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2150890 data_alloc: 285212672 data_used: 2674688
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137125888 unmapped: 26451968 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:32.120729+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137134080 unmapped: 26443776 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:33.120870+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137134080 unmapped: 26443776 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:34.121015+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 275 heartbeat osd_stat(store_statfs(0x1b5435000/0x0/0x1bfc00000, data 0x3d37bf1/0x3ef8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137150464 unmapped: 26427392 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:35.121168+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 275 heartbeat osd_stat(store_statfs(0x1b5435000/0x0/0x1bfc00000, data 0x3d37bf1/0x3ef8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137150464 unmapped: 26427392 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:36.121349+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2154634 data_alloc: 285212672 data_used: 2686976
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 137150464 unmapped: 26427392 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:37.121524+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 138199040 unmapped: 25378816 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:38.121642+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 275 heartbeat osd_stat(store_statfs(0x1b5436000/0x0/0x1bfc00000, data 0x3d37bc1/0x3ef7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 138199040 unmapped: 25378816 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:39.121816+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 138199040 unmapped: 25378816 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:40.121976+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.271457672s of 10.442067146s, submitted: 47
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 138207232 unmapped: 25370624 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 276 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:41.122088+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 276 heartbeat osd_stat(store_statfs(0x1b5431000/0x0/0x1bfc00000, data 0x3d3a1f5/0x3efb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2157166 data_alloc: 285212672 data_used: 2699264
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 276 heartbeat osd_stat(store_statfs(0x1b5431000/0x0/0x1bfc00000, data 0x3d3a1f5/0x3efb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139280384 unmapped: 24297472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:42.122276+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 276 heartbeat osd_stat(store_statfs(0x1b5432000/0x0/0x1bfc00000, data 0x3d3a253/0x3efa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139280384 unmapped: 24297472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:43.122453+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139280384 unmapped: 24297472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:44.122626+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139280384 unmapped: 24297472 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:45.122803+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139288576 unmapped: 24289280 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:46.122960+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2161366 data_alloc: 285212672 data_used: 2699264
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139288576 unmapped: 24289280 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:47.123149+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139296768 unmapped: 24281088 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:48.123294+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 277 handle_osd_map epochs [277,277], i have 277, src has [1,277]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 277 heartbeat osd_stat(store_statfs(0x1b542b000/0x0/0x1bfc00000, data 0x3d3c998/0x3f01000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 140353536 unmapped: 23224320 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:49.123378+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 140353536 unmapped: 23224320 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:50.123521+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139304960 unmapped: 24272896 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:51.123729+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.946651459s of 10.192763329s, submitted: 86
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2164708 data_alloc: 285212672 data_used: 2711552
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139313152 unmapped: 24264704 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:52.123946+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139313152 unmapped: 24264704 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:53.124081+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 277 heartbeat osd_stat(store_statfs(0x1b542b000/0x0/0x1bfc00000, data 0x3d3caf6/0x3f01000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139313152 unmapped: 24264704 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:54.124269+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139313152 unmapped: 24264704 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:55.124405+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 277 heartbeat osd_stat(store_statfs(0x1b542c000/0x0/0x1bfc00000, data 0x3d3cac6/0x3f01000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139313152 unmapped: 24264704 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:56.124575+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2168530 data_alloc: 285212672 data_used: 2723840
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139329536 unmapped: 24248320 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:57.124736+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139362304 unmapped: 24215552 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 278 heartbeat osd_stat(store_statfs(0x1b542a000/0x0/0x1bfc00000, data 0x3d3eff6/0x3f03000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:58.124913+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 278 heartbeat osd_stat(store_statfs(0x1b5427000/0x0/0x1bfc00000, data 0x3d3f0f2/0x3f04000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139354112 unmapped: 24223744 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:26:59.125073+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139354112 unmapped: 24223744 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:00.125301+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139354112 unmapped: 24223744 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:01.125439+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.812380791s of 10.074032784s, submitted: 69
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2170256 data_alloc: 285212672 data_used: 2723840
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139362304 unmapped: 24215552 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:02.125571+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139362304 unmapped: 24215552 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:03.125751+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5429000/0x0/0x1bfc00000, data 0x3d3f1c1/0x3f04000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139362304 unmapped: 24215552 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:04.125875+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139403264 unmapped: 24174592 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:05.126022+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139403264 unmapped: 24174592 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:06.126143+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2174960 data_alloc: 285212672 data_used: 2736128
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139411456 unmapped: 24166400 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:07.126329+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5423000/0x0/0x1bfc00000, data 0x3d41672/0x3f08000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139411456 unmapped: 24166400 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:08.126463+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139411456 unmapped: 24166400 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:09.126628+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139411456 unmapped: 24166400 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:10.126824+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139411456 unmapped: 24166400 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:11.126960+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2177646 data_alloc: 285212672 data_used: 2736128
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5425000/0x0/0x1bfc00000, data 0x3d41866/0x3f09000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139411456 unmapped: 24166400 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:12.127122+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139436032 unmapped: 24141824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:13.127305+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.888164520s of 12.093890190s, submitted: 58
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139444224 unmapped: 24133632 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:14.127431+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5426000/0x0/0x1bfc00000, data 0x3d41808/0x3f08000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139436032 unmapped: 24141824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:15.127634+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139436032 unmapped: 24141824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:16.127834+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2178724 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139436032 unmapped: 24141824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:17.128004+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:18.128315+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139436032 unmapped: 24141824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:19.128462+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139436032 unmapped: 24141824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:20.128623+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139444224 unmapped: 24133632 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5422000/0x0/0x1bfc00000, data 0x3d4189b/0x3f09000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:21.128761+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139452416 unmapped: 24125440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5427000/0x0/0x1bfc00000, data 0x3d418ff/0x3f07000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2178214 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:22.128981+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139452416 unmapped: 24125440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:23.129348+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139452416 unmapped: 24125440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.487685204s of 10.674777031s, submitted: 37
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:24.129501+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139452416 unmapped: 24125440 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:25.129648+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139460608 unmapped: 24117248 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:26.129825+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139460608 unmapped: 24117248 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2180212 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:27.130014+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139468800 unmapped: 24109056 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5426000/0x0/0x1bfc00000, data 0x3d41a2b/0x3f08000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:28.130173+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139476992 unmapped: 24100864 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:29.130337+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139485184 unmapped: 24092672 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5425000/0x0/0x1bfc00000, data 0x3d419f8/0x3f08000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:30.130505+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139485184 unmapped: 24092672 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:31.130650+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139493376 unmapped: 24084480 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2179842 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:32.130814+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139493376 unmapped: 24084480 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:33.130969+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139501568 unmapped: 24076288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:34.131112+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139501568 unmapped: 24076288 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.305900574s of 10.469120979s, submitted: 34
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5428000/0x0/0x1bfc00000, data 0x3d41a2c/0x3f06000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:35.131333+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139067392 unmapped: 24510464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:36.131499+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139165696 unmapped: 24412160 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2190506 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:37.131664+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 139165696 unmapped: 24412160 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:38.131864+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 140804096 unmapped: 22773760 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:39.132024+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 140804096 unmapped: 22773760 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:40.132193+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 140804096 unmapped: 22773760 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5338000/0x0/0x1bfc00000, data 0x3e2beb2/0x3ff3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [0,1,0,1])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:41.132339+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141402112 unmapped: 22175744 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2211248 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:42.132515+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141402112 unmapped: 22175744 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:43.132676+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141402112 unmapped: 22175744 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b52b4000/0x0/0x1bfc00000, data 0x3eb1dd8/0x4079000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:44.132814+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141008896 unmapped: 22568960 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.577235222s of 10.007252693s, submitted: 110
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:45.132992+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141033472 unmapped: 22544384 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:46.133251+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141033472 unmapped: 22544384 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2210100 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:47.133444+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5213000/0x0/0x1bfc00000, data 0x3f52071/0x4119000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 141049856 unmapped: 22528000 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:48.133613+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 142336000 unmapped: 21241856 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:49.133730+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 142336000 unmapped: 21241856 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:50.133906+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 142794752 unmapped: 20783104 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:51.134061+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143613952 unmapped: 19963904 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2237070 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:52.134223+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143613952 unmapped: 19963904 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b50e9000/0x0/0x1bfc00000, data 0x407d287/0x4243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:53.134403+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143704064 unmapped: 19873792 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:54.134558+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b50eb000/0x0/0x1bfc00000, data 0x407d2ec/0x4243000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143785984 unmapped: 19791872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.595368385s of 10.015462875s, submitted: 92
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:55.134750+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143892480 unmapped: 19685376 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b50c0000/0x0/0x1bfc00000, data 0x40a821d/0x426e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:56.134907+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143777792 unmapped: 19800064 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2232200 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:57.135105+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 143777792 unmapped: 19800064 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:58.135256+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 144965632 unmapped: 18612224 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:27:59.135430+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 18407424 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:00.135574+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 18407424 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5027000/0x0/0x1bfc00000, data 0x4141d58/0x4306000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:01.135689+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 18407424 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2234240 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:02.135873+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 18407424 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:03.136027+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 18407424 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5027000/0x0/0x1bfc00000, data 0x4141df0/0x4306000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:04.136228+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145170432 unmapped: 18407424 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.826758385s of 10.003463745s, submitted: 45
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:05.136378+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:06.136502+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2236120 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5025000/0x0/0x1bfc00000, data 0x4141ee8/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:07.136664+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5025000/0x0/0x1bfc00000, data 0x4141f4d/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:08.136882+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:09.137018+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:10.137157+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:11.137315+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5024000/0x0/0x1bfc00000, data 0x4142012/0x4308000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2237360 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:12.137498+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5024000/0x0/0x1bfc00000, data 0x4142012/0x4308000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:13.137689+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:14.137879+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.867179871s of 10.004052162s, submitted: 30
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:15.138038+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:16.138196+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2236542 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:17.138360+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:18.138667+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5023000/0x0/0x1bfc00000, data 0x414201b/0x4308000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:19.138866+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:20.139053+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145178624 unmapped: 18399232 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:21.139216+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145186816 unmapped: 18391040 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2236574 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:22.139403+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145186816 unmapped: 18391040 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:23.139589+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5024000/0x0/0x1bfc00000, data 0x4141fe6/0x4308000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145186816 unmapped: 18391040 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:24.139721+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145203200 unmapped: 18374656 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:25.139915+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145211392 unmapped: 18366464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.911595345s of 11.018172264s, submitted: 22
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:26.140121+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145211392 unmapped: 18366464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2235756 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5027000/0x0/0x1bfc00000, data 0x4141fb6/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:27.140358+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5027000/0x0/0x1bfc00000, data 0x4141fb6/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145211392 unmapped: 18366464 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:28.140539+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145219584 unmapped: 18358272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:29.140726+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145219584 unmapped: 18358272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:30.140963+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5026000/0x0/0x1bfc00000, data 0x4141f84/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145219584 unmapped: 18358272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:31.141115+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5026000/0x0/0x1bfc00000, data 0x4141f84/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145219584 unmapped: 18358272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2235708 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:32.141286+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145219584 unmapped: 18358272 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:33.141434+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145227776 unmapped: 18350080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:34.141687+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145227776 unmapped: 18350080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:35.141901+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5027000/0x0/0x1bfc00000, data 0x4141fe9/0x4307000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145227776 unmapped: 18350080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.913269997s of 10.000793457s, submitted: 19
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:36.142111+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145227776 unmapped: 18350080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3193103153' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2235580 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:37.142376+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145227776 unmapped: 18350080 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:38.142562+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145235968 unmapped: 18341888 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:39.142825+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145235968 unmapped: 18341888 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b5024000/0x0/0x1bfc00000, data 0x4142148/0x4308000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:40.143056+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145235968 unmapped: 18341888 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:41.143243+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145260544 unmapped: 18317312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2247510 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:42.143438+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145260544 unmapped: 18317312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:43.181407+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145260544 unmapped: 18317312 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4fe5000/0x0/0x1bfc00000, data 0x418206a/0x4348000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:44.181556+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145539072 unmapped: 18038784 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:45.181723+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4f88000/0x0/0x1bfc00000, data 0x41de652/0x43a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146587648 unmapped: 16990208 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.825058937s of 10.004844666s, submitted: 44
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:46.181927+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146587648 unmapped: 16990208 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2251094 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:47.182138+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146817024 unmapped: 16760832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:48.182322+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146817024 unmapped: 16760832 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:49.182508+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145973248 unmapped: 17604608 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:50.182712+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145907712 unmapped: 17670144 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4f55000/0x0/0x1bfc00000, data 0x4213bd4/0x43d9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:51.182896+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145989632 unmapped: 17588224 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4f55000/0x0/0x1bfc00000, data 0x4213bd4/0x43d9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2249578 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:52.183075+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 145997824 unmapped: 17580032 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:53.183243+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146169856 unmapped: 17408000 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:54.183445+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146169856 unmapped: 17408000 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:55.183612+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146169856 unmapped: 17408000 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.842899323s of 10.004701614s, submitted: 31
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4ac6000/0x0/0x1bfc00000, data 0x42a09f4/0x4467000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:56.183740+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146276352 unmapped: 17301504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2263246 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:57.183971+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146276352 unmapped: 17301504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:58.184137+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 146276352 unmapped: 17301504 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:28:59.184381+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147488768 unmapped: 16089088 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4a70000/0x0/0x1bfc00000, data 0x42f6d36/0x44bd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:00.184585+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147488768 unmapped: 16089088 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:01.184758+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147488768 unmapped: 16089088 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4a5d000/0x0/0x1bfc00000, data 0x430996d/0x44d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2267834 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:02.184945+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147718144 unmapped: 15859712 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:03.185075+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147726336 unmapped: 15851520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:04.185224+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147726336 unmapped: 15851520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:05.185383+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147726336 unmapped: 15851520 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.847238541s of 10.002198219s, submitted: 38
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:06.185533+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147947520 unmapped: 15630336 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2268340 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49f1000/0x0/0x1bfc00000, data 0x43774b9/0x453d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:07.185720+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 147947520 unmapped: 15630336 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:08.185859+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148045824 unmapped: 15532032 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:09.186008+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148045824 unmapped: 15532032 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:10.186181+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148135936 unmapped: 15441920 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:11.186344+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148103168 unmapped: 15474688 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2268484 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:12.186482+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148103168 unmapped: 15474688 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49cd000/0x0/0x1bfc00000, data 0x439d7c7/0x4561000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:13.186643+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148103168 unmapped: 15474688 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:14.186817+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148103168 unmapped: 15474688 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49cd000/0x0/0x1bfc00000, data 0x439d7c7/0x4561000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:15.186959+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:16.187139+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2267964 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:17.187359+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49c1000/0x0/0x1bfc00000, data 0x43a967e/0x456d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:18.187556+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 ms_handle_reset con 0x56131ae88c00 session 0x561319b905a0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e5c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:19.187760+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:20.187997+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:21.188158+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2267964 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:22.188385+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49c1000/0x0/0x1bfc00000, data 0x43a967e/0x456d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:23.188567+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:24.188725+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:25.188901+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49c1000/0x0/0x1bfc00000, data 0x43a967e/0x456d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:26.189068+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2267964 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:27.189336+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148193280 unmapped: 15384576 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.556356430s of 22.625360489s, submitted: 16
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:28.189495+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49c1000/0x0/0x1bfc00000, data 0x43a967e/0x456d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:29.189614+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:30.189843+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:31.190000+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b49a5000/0x0/0x1bfc00000, data 0x43c4f15/0x4589000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2270436 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:32.190143+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b498c000/0x0/0x1bfc00000, data 0x43de17d/0x45a2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:33.190308+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b498c000/0x0/0x1bfc00000, data 0x43de17d/0x45a2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:34.190510+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148209664 unmapped: 15368192 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:35.190667+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b498c000/0x0/0x1bfc00000, data 0x43de17d/0x45a2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149553152 unmapped: 14024704 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:36.190820+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4978000/0x0/0x1bfc00000, data 0x43f1be6/0x45b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149561344 unmapped: 14016512 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2278876 data_alloc: 285212672 data_used: 2744320
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:37.191049+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149561344 unmapped: 14016512 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:38.191224+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149676032 unmapped: 13901824 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.1 total, 600.0 interval
                                                          Cumulative writes: 20K writes, 76K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 20K writes, 6905 syncs, 2.95 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 25.51 MB, 0.04 MB/s
                                                          Interval WAL: 10K writes, 4083 syncs, 2.54 writes per sync, written: 0.02 GB, 0.04 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.475468636s of 10.596258163s, submitted: 27
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:39.191366+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 heartbeat osd_stat(store_statfs(0x1b4940000/0x0/0x1bfc00000, data 0x442af2e/0x45ee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149004288 unmapped: 14573568 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:40.191502+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149004288 unmapped: 14573568 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:41.191628+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 280 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149012480 unmapped: 14565376 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2281248 data_alloc: 285212672 data_used: 2756608
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:42.191727+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149012480 unmapped: 14565376 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:43.191870+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 280 heartbeat osd_stat(store_statfs(0x1b48f2000/0x0/0x1bfc00000, data 0x4476292/0x463b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149012480 unmapped: 14565376 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:44.192002+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148881408 unmapped: 14696448 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:45.192162+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:46.192340+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2284200 data_alloc: 285212672 data_used: 2756608
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:47.192597+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:48.192752+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c6000/0x0/0x1bfc00000, data 0x44a2dfd/0x4668000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:49.192978+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:50.193128+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5216/0x466c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5216/0x466c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:51.193324+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148905984 unmapped: 14671872 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2286034 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:52.193487+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:53.193730+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:54.193998+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:55.194186+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:56.194434+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5216/0x466c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2286034 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:57.194650+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:58.194806+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:29:59.194951+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148914176 unmapped: 14663680 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:00.195128+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:01.195266+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2286034 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:02.195431+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5216/0x466c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:03.195585+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:04.195743+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:05.195927+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:06.196034+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5216/0x466c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2286034 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:07.196236+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148922368 unmapped: 14655488 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:08.196344+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148930560 unmapped: 14647296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:09.196514+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5216/0x466c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148930560 unmapped: 14647296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:10.196654+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148930560 unmapped: 14647296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:11.196853+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148930560 unmapped: 14647296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2286034 data_alloc: 285212672 data_used: 2768896
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:12.197002+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148930560 unmapped: 14647296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a7e7400
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.309261322s of 34.440685272s, submitted: 66
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:13.197148+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 148930560 unmapped: 14647296 heap: 163577856 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b48c1000/0x0/0x1bfc00000, data 0x44a5239/0x466d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:14.197333+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149438464 unmapped: 22536192 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 heartbeat osd_stat(store_statfs(0x1b3c51000/0x0/0x1bfc00000, data 0x5115239/0x52dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:15.197502+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 282 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 282 ms_handle_reset con 0x56131a7e7400 session 0x56131ad83a40
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: handle_auth_request added challenge on 0x56131a202c00
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149446656 unmapped: 22528000 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:16.197650+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 283 handle_osd_map epochs [282,283], i have 283, src has [1,283]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 283 ms_handle_reset con 0x56131a202c00 session 0x561319b90780
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297681 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:17.197820+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:18.198052+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 283 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44a9cd0/0x4674000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:19.198227+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 283 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44a9cd0/0x4674000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:20.198395+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:21.198618+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297681 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:22.198879+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150519808 unmapped: 21454848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:23.199066+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _renew_subs
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _send_mon_message to mon.np0005548788 at v2:172.18.0.103:3300/0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.138595581s of 10.349264145s, submitted: 46
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:24.199276+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:25.199475+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:26.199648+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297744 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:27.199852+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:28.199991+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:29.200160+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:30.200333+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:31.200522+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149487616 unmapped: 22487040 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297744 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:32.200658+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:33.200888+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:34.201056+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:35.201352+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:36.201674+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297744 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:37.201995+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:38.202264+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b5000/0x0/0x1bfc00000, data 0x44ac0e9/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:39.202550+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149495808 unmapped: 22478848 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:40.202771+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149504000 unmapped: 22470656 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:41.203047+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149504000 unmapped: 22470656 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.378524780s of 18.404926300s, submitted: 15
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 ms_handle_reset con 0x56131a70fc00 session 0x56131ae810e0
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:42.203283+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:43.203498+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Got map version 55
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/3354697053,v1:172.18.0.108:6811/3354697053]
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:44.203692+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:45.203927+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:46.204068+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:47.204264+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:48.204477+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:49.204692+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:50.204883+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:51.205046+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:52.205305+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:53.205534+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:54.205714+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:55.205950+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149897216 unmapped: 22077440 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:56.206151+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:57.206393+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:58.206628+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:30:59.206842+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:00.207063+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:01.207253+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:02.207426+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:03.207591+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149905408 unmapped: 22069248 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:04.207820+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:05.208021+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:06.208166+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:07.208405+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:08.208587+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:09.208770+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:10.208970+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:11.209167+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 22061056 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:12.209348+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297088 data_alloc: 285212672 data_used: 2781184
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:13.209513+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:14.209727+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:15.209919+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:16.210087+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:17.210324+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:18.210504+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:19.210705+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 22052864 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:20.210912+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:21.211122+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:22.211327+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:23.211485+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:24.211759+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:25.212020+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:26.212184+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:27.212387+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149929984 unmapped: 22044672 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:28.212609+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:29.212871+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:30.213042+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:31.213191+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:32.213383+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:33.213548+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:34.213727+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:35.213939+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149938176 unmapped: 22036480 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:36.214117+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:37.214381+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:38.215311+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:39.216087+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:40.216689+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:41.217114+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:42.217447+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:43.217674+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149946368 unmapped: 22028288 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:44.217856+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:45.218036+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:46.218261+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:47.218476+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:48.218606+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:49.218736+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:50.218973+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:51.219119+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:52.219504+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149954560 unmapped: 22020096 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:53.219675+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:54.219825+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:55.219974+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:56.220171+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:57.220373+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:58.220527+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:31:59.220768+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:00.220981+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149962752 unmapped: 22011904 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:01.221186+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:02.221367+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:03.221544+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:04.221708+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:05.221882+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:06.222080+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:07.222246+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:08.222384+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149970944 unmapped: 22003712 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:09.222582+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:10.222742+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:11.222914+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:12.223049+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:13.223210+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:14.223360+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:15.223493+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:16.223622+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 21995520 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:17.223822+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149987328 unmapped: 21987328 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:18.224008+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:19.224148+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:20.224304+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:21.224410+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:22.224559+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:23.224733+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:24.224834+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149995520 unmapped: 21979136 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:25.224956+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150003712 unmapped: 21970944 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:26.225124+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150003712 unmapped: 21970944 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:27.225278+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150003712 unmapped: 21970944 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:28.225402+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150003712 unmapped: 21970944 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:29.225511+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150003712 unmapped: 21970944 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:30.225693+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150003712 unmapped: 21970944 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:31.225839+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'config diff' '{prefix=config diff}'
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 150011904 unmapped: 21962752 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'config show' '{prefix=config show}'
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'counter dump' '{prefix=counter dump}'
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'counter schema' '{prefix=counter schema}'
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:32.225980+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149741568 unmapped: 22233088 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: bluestore.MempoolThread(0x561316739b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2297248 data_alloc: 285212672 data_used: 2785280
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: osd.0 284 heartbeat osd_stat(store_statfs(0x1b48b6000/0x0/0x1bfc00000, data 0x44ac2fc/0x4678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [1,2,3,4,5] op hist [])
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: tick
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_tickets
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-06T10:32:33.226132+0000)
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: prioritycache tune_memory target: 5709084876 mapped: 149651456 unmapped: 22323200 heap: 171974656 old mem: 4047415775 new mem: 4047415775
Dec 06 10:33:04 np0005548790.localdomain ceph-osd[31627]: do_command 'log dump' '{prefix=log dump}'
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1689303103' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/294876320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2422302273' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1710489525' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: pgmap v833: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/4062777244' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1935362394' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2953662664' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1342647983' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2985416432' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/321501752' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3932188867' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/908175058' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3193103153' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2348448191' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1689303103' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1182164294' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/730220607' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 06 10:33:04 np0005548790.localdomain rsyslogd[759]: imjournal from <localhost:ceph-osd>: begin to drop messages due to rate-limiting
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4280751732' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 06 10:33:04 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2389094995' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1798206855' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50115 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/77878866' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2370934995' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/730220607' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3268197787' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/946269881' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/4280751732' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2574211342' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/4154228913' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2389094995' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1798206855' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1230936092' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1464918040' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1107750515' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3451307951' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69737 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59533 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50127 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50124 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69758 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 06 10:33:05 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/322497280' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59560 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:05 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59566 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50136 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50139 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69773 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69779 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.50115 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3451307951' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.69737 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.59539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.59533 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: pgmap v834: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.50127 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.50124 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.69758 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/322497280' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mon[301742]: from='client.59560 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50145 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain systemd[1]: Starting Hostname Service...
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69785 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59587 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain systemd[1]: Started Hostname Service.
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69791 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50157 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:06 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69803 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:33:07.061 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59605 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.330 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.331 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.332 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.333 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.334 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceilometer_agent_compute[237083]: 2025-12-06 10:33:07.335 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69815 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50175 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.59566 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.50136 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.50139 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.69773 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.69779 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.59572 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.50145 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.69785 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.59587 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.69791 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/488060457' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.50157 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1050406131' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3447850753' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59617 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v835: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 06 10:33:07 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1050443617' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69827 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:07 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "versions"} v 0)
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1525022835' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:33:08.364 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.69803 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.59605 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.69815 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.50175 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.59617 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: pgmap v835: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/375595248' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1050443617' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.50187 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/2289626626' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.69827 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.59629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/730980905' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1525022835' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1041013098' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/2525006127' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3853367064' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 06 10:33:08 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2625254279' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 06 10:33:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59683 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/3853367064' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2625254279' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/759059760' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/41112147' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v836: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:09 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50256 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 06 10:33:09 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2611527651' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.69911 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='client.59683 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: pgmap v836: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='client.50256 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/995061289' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2611527651' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3002096115' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/509212967' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 06 10:33:10 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1474627565' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "df"} v 0)
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2126088694' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 06 10:33:11 np0005548790.localdomain kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 06 10:33:11 np0005548790.localdomain kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 06 10:33:11 np0005548790.localdomain kernel: cfg80211: failed to load regulatory.db
Dec 06 10:33:11 np0005548790.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.
Dec 06 10:33:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(cluster) log [DBG] : pgmap v837: 177 pgs: 177 active+clean; 235 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Dec 06 10:33:11 np0005548790.localdomain podman[330985]: 2025-12-06 10:33:11.58688882 +0000 UTC m=+0.091856355 container health_status 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 06 10:33:11 np0005548790.localdomain podman[330985]: 2025-12-06 10:33:11.598193204 +0000 UTC m=+0.103160729 container exec_died 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 06 10:33:11 np0005548790.localdomain systemd[1]: 97302af2aee72b86f2c7665facfa7e678b162f338c523143a5d422bb3ffe024c.service: Deactivated successfully.
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.69911 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/1083196786' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/1474627565' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/3948867050' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.106:0/3836096139' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.108:0/2126088694' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: from='client.? 172.18.0.107:0/1936156670' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 06 10:33:11 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3406386216' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.59728 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:33:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:33:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] scanning for idle connections..
Dec 06 10:33:11 np0005548790.localdomain ceph-mgr[286934]: [volumes INFO mgr_util] cleaning up connections: []
Dec 06 10:33:12 np0005548790.localdomain ceph-mgr[286934]: log_channel(audit) log [DBG] : from='client.50292 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 06 10:33:12 np0005548790.localdomain nova_compute[280865]: 2025-12-06 10:33:12.095 280869 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 06 10:33:12 np0005548790.localdomain ceph-mon[301742]: mon.np0005548790@2(peon) e15 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 06 10:33:12 np0005548790.localdomain ceph-mon[301742]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3352183805' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
